Dec 16 04:08:57.409021 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:18:19 -00 2025
Dec 16 04:08:57.409076 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 04:08:57.409091 kernel: BIOS-provided physical RAM map:
Dec 16 04:08:57.409102 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 16 04:08:57.409130 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 16 04:08:57.409154 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 16 04:08:57.409166 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 16 04:08:57.409183 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 16 04:08:57.409196 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 16 04:08:57.409207 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 16 04:08:57.409218 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 04:08:57.409229 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 16 04:08:57.409239 kernel: NX (Execute Disable) protection: active
Dec 16 04:08:57.409266 kernel: APIC: Static calls initialized
Dec 16 04:08:57.409280 kernel: SMBIOS 2.8 present.
Dec 16 04:08:57.409292 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 16 04:08:57.409309 kernel: DMI: Memory slots populated: 1/1
Dec 16 04:08:57.409335 kernel: Hypervisor detected: KVM
Dec 16 04:08:57.409348 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 16 04:08:57.409360 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 04:08:57.409372 kernel: kvm-clock: using sched offset of 5911768083 cycles
Dec 16 04:08:57.409400 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 04:08:57.409413 kernel: tsc: Detected 2799.998 MHz processor
Dec 16 04:08:57.409426 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 04:08:57.409438 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 04:08:57.409466 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 16 04:08:57.409480 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 16 04:08:57.409492 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 04:08:57.409504 kernel: Using GB pages for direct mapping
Dec 16 04:08:57.409516 kernel: ACPI: Early table checksum verification disabled
Dec 16 04:08:57.409528 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 16 04:08:57.409541 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 04:08:57.409553 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 04:08:57.409580 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 04:08:57.409593 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 16 04:08:57.409605 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 04:08:57.409618 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 04:08:57.409630 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 04:08:57.409642 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 04:08:57.409654 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 16 04:08:57.409691 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 16 04:08:57.409705 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 16 04:08:57.409717 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 16 04:08:57.409730 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 16 04:08:57.409757 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 16 04:08:57.409770 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 16 04:08:57.409782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 16 04:08:57.409795 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 16 04:08:57.409808 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 16 04:08:57.409821 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Dec 16 04:08:57.409834 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Dec 16 04:08:57.409861 kernel: Zone ranges:
Dec 16 04:08:57.409875 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 04:08:57.409887 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 16 04:08:57.409900 kernel: Normal empty
Dec 16 04:08:57.409913 kernel: Device empty
Dec 16 04:08:57.409925 kernel: Movable zone start for each node
Dec 16 04:08:57.409938 kernel: Early memory node ranges
Dec 16 04:08:57.409950 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 16 04:08:57.409977 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 16 04:08:57.409991 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 16 04:08:57.410003 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 04:08:57.410016 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 04:08:57.410034 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 16 04:08:57.410048 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 04:08:57.410065 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 04:08:57.410091 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 04:08:57.410104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 04:08:57.410117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 04:08:57.410130 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 04:08:57.410153 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 04:08:57.410165 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 04:08:57.410178 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 04:08:57.410205 kernel: TSC deadline timer available
Dec 16 04:08:57.410219 kernel: CPU topo: Max. logical packages: 16
Dec 16 04:08:57.410232 kernel: CPU topo: Max. logical dies: 16
Dec 16 04:08:57.410245 kernel: CPU topo: Max. dies per package: 1
Dec 16 04:08:57.410257 kernel: CPU topo: Max. threads per core: 1
Dec 16 04:08:57.410270 kernel: CPU topo: Num. cores per package: 1
Dec 16 04:08:57.410282 kernel: CPU topo: Num. threads per package: 1
Dec 16 04:08:57.410295 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Dec 16 04:08:57.410321 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 04:08:57.410335 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 16 04:08:57.410348 kernel: Booting paravirtualized kernel on KVM
Dec 16 04:08:57.410361 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 04:08:57.412996 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Dec 16 04:08:57.413029 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Dec 16 04:08:57.413044 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Dec 16 04:08:57.413082 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 16 04:08:57.413096 kernel: kvm-guest: PV spinlocks enabled
Dec 16 04:08:57.413109 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 04:08:57.413123 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 04:08:57.413146 kernel: random: crng init done
Dec 16 04:08:57.413160 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 04:08:57.413173 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 04:08:57.413202 kernel: Fallback order for Node 0: 0
Dec 16 04:08:57.413215 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Dec 16 04:08:57.413229 kernel: Policy zone: DMA32
Dec 16 04:08:57.413241 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 04:08:57.413254 kernel: software IO TLB: area num 16.
Dec 16 04:08:57.413267 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 16 04:08:57.413280 kernel: Kernel/User page tables isolation: enabled
Dec 16 04:08:57.413307 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 04:08:57.413321 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 04:08:57.413334 kernel: Dynamic Preempt: voluntary
Dec 16 04:08:57.413346 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 04:08:57.413360 kernel: rcu: RCU event tracing is enabled.
Dec 16 04:08:57.413398 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 16 04:08:57.413414 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 04:08:57.413448 kernel: Rude variant of Tasks RCU enabled.
Dec 16 04:08:57.413462 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 04:08:57.413476 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 04:08:57.413489 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 16 04:08:57.413502 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 16 04:08:57.413515 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 16 04:08:57.413528 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 16 04:08:57.413555 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 16 04:08:57.413569 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 04:08:57.413610 kernel: Console: colour VGA+ 80x25
Dec 16 04:08:57.413639 kernel: printk: legacy console [tty0] enabled
Dec 16 04:08:57.413652 kernel: printk: legacy console [ttyS0] enabled
Dec 16 04:08:57.413671 kernel: ACPI: Core revision 20240827
Dec 16 04:08:57.413686 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 04:08:57.413699 kernel: x2apic enabled
Dec 16 04:08:57.413713 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 04:08:57.413739 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Dec 16 04:08:57.413754 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 16 04:08:57.413768 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 04:08:57.413781 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 16 04:08:57.413807 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 16 04:08:57.413821 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 04:08:57.413835 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 04:08:57.413848 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 04:08:57.413861 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 16 04:08:57.413874 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 04:08:57.413887 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 04:08:57.413900 kernel: MDS: Mitigation: Clear CPU buffers
Dec 16 04:08:57.413913 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 16 04:08:57.413926 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 16 04:08:57.413939 kernel: active return thunk: its_return_thunk
Dec 16 04:08:57.413966 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 04:08:57.413980 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 04:08:57.413993 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 04:08:57.414006 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 04:08:57.414020 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 04:08:57.414033 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 16 04:08:57.414046 kernel: Freeing SMP alternatives memory: 32K
Dec 16 04:08:57.414059 kernel: pid_max: default: 32768 minimum: 301
Dec 16 04:08:57.414072 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 04:08:57.414085 kernel: landlock: Up and running.
Dec 16 04:08:57.414112 kernel: SELinux: Initializing.
Dec 16 04:08:57.414126 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 04:08:57.414148 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 04:08:57.414162 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 16 04:08:57.414175 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 16 04:08:57.414188 kernel: signal: max sigframe size: 1776
Dec 16 04:08:57.414207 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 04:08:57.414222 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 04:08:57.414248 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level
Dec 16 04:08:57.414263 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 04:08:57.414277 kernel: smp: Bringing up secondary CPUs ...
Dec 16 04:08:57.414290 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 04:08:57.414303 kernel: .... node #0, CPUs: #1
Dec 16 04:08:57.414317 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 04:08:57.414330 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Dec 16 04:08:57.414344 kernel: Memory: 1912060K/2096616K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15556K init, 2484K bss, 178540K reserved, 0K cma-reserved)
Dec 16 04:08:57.414372 kernel: devtmpfs: initialized
Dec 16 04:08:57.414400 kernel: x86/mm: Memory block size: 128MB
Dec 16 04:08:57.414424 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 04:08:57.414438 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 16 04:08:57.414452 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 04:08:57.414465 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 04:08:57.414478 kernel: audit: initializing netlink subsys (disabled)
Dec 16 04:08:57.414509 kernel: audit: type=2000 audit(1765858132.668:1): state=initialized audit_enabled=0 res=1
Dec 16 04:08:57.414523 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 04:08:57.414536 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 04:08:57.414550 kernel: cpuidle: using governor menu
Dec 16 04:08:57.414563 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 04:08:57.414577 kernel: dca service started, version 1.12.1
Dec 16 04:08:57.414591 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 16 04:08:57.414619 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 16 04:08:57.414632 kernel: PCI: Using configuration type 1 for base access
Dec 16 04:08:57.414646 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 04:08:57.414660 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 04:08:57.414673 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 04:08:57.414686 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 04:08:57.414699 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 04:08:57.414728 kernel: ACPI: Added _OSI(Module Device)
Dec 16 04:08:57.414742 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 04:08:57.414755 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 04:08:57.414769 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 04:08:57.414782 kernel: ACPI: Interpreter enabled
Dec 16 04:08:57.414795 kernel: ACPI: PM: (supports S0 S5)
Dec 16 04:08:57.414808 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 04:08:57.414836 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 04:08:57.414850 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 04:08:57.414863 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 04:08:57.414877 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 04:08:57.415249 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 04:08:57.415536 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 16 04:08:57.415790 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 16 04:08:57.415812 kernel: PCI host bridge to bus 0000:00
Dec 16 04:08:57.416039 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 04:08:57.416261 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 04:08:57.416489 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 04:08:57.416695 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 16 04:08:57.416921 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 16 04:08:57.417126 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 16 04:08:57.417344 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 04:08:57.422651 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 16 04:08:57.422905 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint
Dec 16 04:08:57.423172 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref]
Dec 16 04:08:57.423418 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff]
Dec 16 04:08:57.423657 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref]
Dec 16 04:08:57.423881 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 04:08:57.424115 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 04:08:57.424352 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff]
Dec 16 04:08:57.426661 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 16 04:08:57.426890 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 16 04:08:57.427115 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 04:08:57.427367 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 04:08:57.428637 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff]
Dec 16 04:08:57.428866 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 16 04:08:57.429132 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 16 04:08:57.429370 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 04:08:57.430715 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 04:08:57.430942 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff]
Dec 16 04:08:57.431184 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 16 04:08:57.432236 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 16 04:08:57.432493 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 04:08:57.432732 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 04:08:57.432956 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff]
Dec 16 04:08:57.433196 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 16 04:08:57.435481 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 16 04:08:57.435764 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 04:08:57.436008 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 04:08:57.436253 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff]
Dec 16 04:08:57.437538 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 16 04:08:57.437772 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 16 04:08:57.438004 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 04:08:57.438296 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 04:08:57.439557 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff]
Dec 16 04:08:57.439792 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 16 04:08:57.440019 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 16 04:08:57.440259 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 04:08:57.440520 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 04:08:57.440772 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff]
Dec 16 04:08:57.440995 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 16 04:08:57.441233 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 16 04:08:57.441491 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 04:08:57.441727 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 04:08:57.441974 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff]
Dec 16 04:08:57.442211 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 16 04:08:57.442540 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 16 04:08:57.442768 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 04:08:57.443007 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 04:08:57.443245 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df]
Dec 16 04:08:57.443525 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff]
Dec 16 04:08:57.443750 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 16 04:08:57.443969 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref]
Dec 16 04:08:57.444217 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 16 04:08:57.444462 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Dec 16 04:08:57.444687 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff]
Dec 16 04:08:57.444933 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 16 04:08:57.445179 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 16 04:08:57.445433 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 04:08:57.445685 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 16 04:08:57.445907 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff]
Dec 16 04:08:57.446166 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff]
Dec 16 04:08:57.446420 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 16 04:08:57.446644 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 16 04:08:57.446889 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Dec 16 04:08:57.447114 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit]
Dec 16 04:08:57.447351 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 16 04:08:57.447648 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 16 04:08:57.447872 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 16 04:08:57.448115 kernel: pci_bus 0000:02: extended config space not accessible
Dec 16 04:08:57.448371 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint
Dec 16 04:08:57.448626 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f]
Dec 16 04:08:57.448877 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 16 04:08:57.449117 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Dec 16 04:08:57.449361 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit]
Dec 16 04:08:57.449617 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 16 04:08:57.449866 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Dec 16 04:08:57.450095 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 16 04:08:57.450354 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 16 04:08:57.450599 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 16 04:08:57.450823 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 16 04:08:57.451046 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 16 04:08:57.451284 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 16 04:08:57.451542 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 16 04:08:57.451583 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 04:08:57.451598 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 04:08:57.451612 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 04:08:57.451626 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 04:08:57.451639 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 04:08:57.451653 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 04:08:57.451666 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 04:08:57.451695 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 04:08:57.451709 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 04:08:57.451723 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 04:08:57.451737 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 04:08:57.451750 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 04:08:57.451764 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 04:08:57.451777 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 04:08:57.451805 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 04:08:57.451819 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 04:08:57.451832 kernel: iommu: Default domain type: Translated
Dec 16 04:08:57.451846 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 04:08:57.451859 kernel: PCI: Using ACPI for IRQ routing
Dec 16 04:08:57.451873 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 04:08:57.451886 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 16 04:08:57.451914 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 16 04:08:57.452146 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 04:08:57.452371 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 04:08:57.452612 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 04:08:57.452633 kernel: vgaarb: loaded
Dec 16 04:08:57.452648 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 04:08:57.452683 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 04:08:57.452699 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 04:08:57.452713 kernel: pnp: PnP ACPI init
Dec 16 04:08:57.452960 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 16 04:08:57.452982 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 04:08:57.452997 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 04:08:57.453011 kernel: NET: Registered PF_INET protocol family
Dec 16 04:08:57.453044 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 04:08:57.453059 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 16 04:08:57.453073 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 04:08:57.453086 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 04:08:57.453100 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 16 04:08:57.453114 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 16 04:08:57.453128 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 04:08:57.453166 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 04:08:57.453181 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 04:08:57.453194 kernel: NET: Registered PF_XDP protocol family
Dec 16 04:08:57.453438 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 16 04:08:57.453665 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 16 04:08:57.453889 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 16 04:08:57.454113 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 16 04:08:57.454367 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 16 04:08:57.454632 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 16 04:08:57.454857 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 16 04:08:57.455080 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 16 04:08:57.455314 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Dec 16 04:08:57.455559 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Dec 16 04:08:57.455804 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Dec 16 04:08:57.456025 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Dec 16 04:08:57.456260 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Dec 16 04:08:57.456514 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Dec 16 04:08:57.456737 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Dec 16 04:08:57.456958 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Dec 16 04:08:57.457226 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 16 04:08:57.457558 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 16 04:08:57.457782 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 16 04:08:57.458002 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 16 04:08:57.458237 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 16 04:08:57.458496 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 04:08:57.458720 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 16 04:08:57.458940 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 16 04:08:57.459203 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 16 04:08:57.459447 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 04:08:57.459672 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 16 04:08:57.459893 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 16 04:08:57.460112 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 16 04:08:57.460364 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 04:08:57.460620 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 16 04:08:57.460842 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 16 04:08:57.461061 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 16 04:08:57.461295 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 04:08:57.461562 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 16 04:08:57.461786 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 16 04:08:57.462007 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 16 04:08:57.462242 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 04:08:57.462498 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 16 04:08:57.462719 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 16 04:08:57.462962 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 16 04:08:57.463199 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 04:08:57.463444 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 16 04:08:57.463667 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 16 04:08:57.463888 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 16 04:08:57.464108 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 04:08:57.464343 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 16 04:08:57.464613 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 16 04:08:57.464835 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 16 04:08:57.465054 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 04:08:57.465281 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 04:08:57.465509 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 04:08:57.465723 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 04:08:57.465927 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 16 04:08:57.466167 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 16 04:08:57.466407 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 16 04:08:57.466639 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 16 04:08:57.466851 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 16 04:08:57.467060 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 04:08:57.467320 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 16 04:08:57.467572 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 16 04:08:57.467784 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 16 04:08:57.467993 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 04:08:57.468228 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 16 04:08:57.468470 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 16 04:08:57.468706 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 04:08:57.468927 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 16 04:08:57.469149 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 16 04:08:57.469363 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 04:08:57.469603 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 16 04:08:57.469814 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 16 04:08:57.470047 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 04:08:57.470282 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 16 04:08:57.470515 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 16
04:08:57.470725 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 04:08:57.470956 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Dec 16 04:08:57.471201 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Dec 16 04:08:57.471429 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 04:08:57.471652 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Dec 16 04:08:57.471884 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 16 04:08:57.472094 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 04:08:57.472117 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 16 04:08:57.472161 kernel: PCI: CLS 0 bytes, default 64 Dec 16 04:08:57.472177 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 16 04:08:57.472192 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 16 04:08:57.472206 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 16 04:08:57.472220 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Dec 16 04:08:57.472235 kernel: Initialise system trusted keyrings Dec 16 04:08:57.472250 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 16 04:08:57.472281 kernel: Key type asymmetric registered Dec 16 04:08:57.472295 kernel: Asymmetric key parser 'x509' registered Dec 16 04:08:57.472309 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 04:08:57.472323 kernel: io scheduler mq-deadline registered Dec 16 04:08:57.472337 kernel: io scheduler kyber registered Dec 16 04:08:57.472351 kernel: io scheduler bfq registered Dec 16 04:08:57.472640 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 16 04:08:57.472887 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 16 04:08:57.473130 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ 
PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 04:08:57.473391 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 16 04:08:57.473617 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 16 04:08:57.473839 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 04:08:57.474086 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 16 04:08:57.474323 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 16 04:08:57.474563 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 04:08:57.474788 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 16 04:08:57.475010 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 16 04:08:57.475266 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 04:08:57.475507 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 16 04:08:57.475731 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 16 04:08:57.475953 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 04:08:57.476212 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 16 04:08:57.476467 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 16 04:08:57.476714 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 04:08:57.476938 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 16 04:08:57.478018 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 16 04:08:57.478283 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- 
AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 04:08:57.478570 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 16 04:08:57.478800 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 16 04:08:57.479044 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 04:08:57.479068 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 04:08:57.479085 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 16 04:08:57.479100 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 16 04:08:57.479143 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 04:08:57.479161 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 04:08:57.479176 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 04:08:57.479191 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 04:08:57.479205 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 04:08:57.480844 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 16 04:08:57.480872 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 04:08:57.481122 kernel: rtc_cmos 00:03: registered as rtc0 Dec 16 04:08:57.481353 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T04:08:55 UTC (1765858135) Dec 16 04:08:57.481594 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 16 04:08:57.481616 kernel: intel_pstate: CPU model not supported Dec 16 04:08:57.481631 kernel: NET: Registered PF_INET6 protocol family Dec 16 04:08:57.481645 kernel: Segment Routing with IPv6 Dec 16 04:08:57.481681 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 04:08:57.481709 kernel: NET: Registered PF_PACKET protocol family Dec 16 04:08:57.481724 kernel: Key type dns_resolver registered Dec 16 04:08:57.481751 kernel: IPI shorthand broadcast: enabled Dec 16 04:08:57.481767 kernel: 
sched_clock: Marking stable (2350004105, 220650149)->(2714761226, -144106972) Dec 16 04:08:57.481781 kernel: registered taskstats version 1 Dec 16 04:08:57.481796 kernel: Loading compiled-in X.509 certificates Dec 16 04:08:57.481810 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: aafd1eb27ea805b8231c3bede9210239fae84df8' Dec 16 04:08:57.481838 kernel: Demotion targets for Node 0: null Dec 16 04:08:57.481853 kernel: Key type .fscrypt registered Dec 16 04:08:57.481867 kernel: Key type fscrypt-provisioning registered Dec 16 04:08:57.481881 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 04:08:57.481895 kernel: ima: Allocated hash algorithm: sha1 Dec 16 04:08:57.481909 kernel: ima: No architecture policies found Dec 16 04:08:57.481924 kernel: clk: Disabling unused clocks Dec 16 04:08:57.481951 kernel: Freeing unused kernel image (initmem) memory: 15556K Dec 16 04:08:57.481967 kernel: Write protecting the kernel read-only data: 47104k Dec 16 04:08:57.481981 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Dec 16 04:08:57.481995 kernel: Run /init as init process Dec 16 04:08:57.482009 kernel: with arguments: Dec 16 04:08:57.482024 kernel: /init Dec 16 04:08:57.482037 kernel: with environment: Dec 16 04:08:57.482064 kernel: HOME=/ Dec 16 04:08:57.482079 kernel: TERM=linux Dec 16 04:08:57.482093 kernel: ACPI: bus type USB registered Dec 16 04:08:57.482108 kernel: usbcore: registered new interface driver usbfs Dec 16 04:08:57.482122 kernel: usbcore: registered new interface driver hub Dec 16 04:08:57.482146 kernel: usbcore: registered new device driver usb Dec 16 04:08:57.482400 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 16 04:08:57.482634 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 16 04:08:57.482887 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 16 04:08:57.483116 kernel: xhci_hcd 0000:03:00.0: xHCI Host 
Controller Dec 16 04:08:57.483357 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 16 04:08:57.483601 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 16 04:08:57.483872 kernel: hub 1-0:1.0: USB hub found Dec 16 04:08:57.484115 kernel: hub 1-0:1.0: 4 ports detected Dec 16 04:08:57.484446 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 16 04:08:57.484713 kernel: hub 2-0:1.0: USB hub found Dec 16 04:08:57.484955 kernel: hub 2-0:1.0: 4 ports detected Dec 16 04:08:57.484977 kernel: SCSI subsystem initialized Dec 16 04:08:57.484993 kernel: libata version 3.00 loaded. Dec 16 04:08:57.485256 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 04:08:57.485280 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 04:08:57.485518 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 04:08:57.486558 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 04:08:57.486786 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 04:08:57.487034 kernel: scsi host0: ahci Dec 16 04:08:57.487316 kernel: scsi host1: ahci Dec 16 04:08:57.489306 kernel: scsi host2: ahci Dec 16 04:08:57.489604 kernel: scsi host3: ahci Dec 16 04:08:57.489851 kernel: scsi host4: ahci Dec 16 04:08:57.490109 kernel: scsi host5: ahci Dec 16 04:08:57.490177 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Dec 16 04:08:57.490193 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Dec 16 04:08:57.490208 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Dec 16 04:08:57.490222 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Dec 16 04:08:57.490236 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Dec 16 04:08:57.490251 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 
lpm-pol 1 Dec 16 04:08:57.490573 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 16 04:08:57.490599 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 04:08:57.490615 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 04:08:57.490629 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 04:08:57.490644 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 04:08:57.490658 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 04:08:57.490672 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 04:08:57.490707 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 16 04:08:57.490722 kernel: usbcore: registered new interface driver usbhid Dec 16 04:08:57.490737 kernel: usbhid: USB HID core driver Dec 16 04:08:57.490985 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Dec 16 04:08:57.491227 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 16 04:08:57.491269 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 16 04:08:57.491587 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 16 04:08:57.491612 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 04:08:57.491626 kernel: GPT:25804799 != 125829119 Dec 16 04:08:57.491641 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 04:08:57.491655 kernel: GPT:25804799 != 125829119 Dec 16 04:08:57.491668 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 04:08:57.491702 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 04:08:57.491718 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 16 04:08:57.491733 kernel: device-mapper: uevent: version 1.0.3
Dec 16 04:08:57.491747 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 04:08:57.491762 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Dec 16 04:08:57.491776 kernel: raid6: sse2x4 gen() 14604 MB/s
Dec 16 04:08:57.491790 kernel: raid6: sse2x2 gen() 9883 MB/s
Dec 16 04:08:57.491819 kernel: raid6: sse2x1 gen() 10231 MB/s
Dec 16 04:08:57.491834 kernel: raid6: using algorithm sse2x4 gen() 14604 MB/s
Dec 16 04:08:57.491848 kernel: raid6: .... xor() 8277 MB/s, rmw enabled
Dec 16 04:08:57.491863 kernel: raid6: using ssse3x2 recovery algorithm
Dec 16 04:08:57.491877 kernel: xor: automatically using best checksumming function avx
Dec 16 04:08:57.491891 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 04:08:57.491906 kernel: BTRFS: device fsid 57a8262f-2900-48ba-a17e-aafbd70d59c7 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (193)
Dec 16 04:08:57.491935 kernel: BTRFS info (device dm-0): first mount of filesystem 57a8262f-2900-48ba-a17e-aafbd70d59c7
Dec 16 04:08:57.491951 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 04:08:57.491966 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 04:08:57.491980 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 04:08:57.491995 kernel: loop: module loaded
Dec 16 04:08:57.492009 kernel: loop0: detected capacity change from 0 to 100528
Dec 16 04:08:57.492023 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 04:08:57.492053 systemd[1]: Successfully made /usr/ read-only.
Dec 16 04:08:57.492073 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 04:08:57.492089 systemd[1]: Detected virtualization kvm.
Dec 16 04:08:57.492104 systemd[1]: Detected architecture x86-64.
Dec 16 04:08:57.492118 systemd[1]: Running in initrd.
Dec 16 04:08:57.492144 systemd[1]: No hostname configured, using default hostname.
Dec 16 04:08:57.492177 systemd[1]: Hostname set to .
Dec 16 04:08:57.492193 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 16 04:08:57.492207 systemd[1]: Queued start job for default target initrd.target.
Dec 16 04:08:57.492222 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 04:08:57.492238 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 04:08:57.492253 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 04:08:57.492284 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 04:08:57.492301 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 04:08:57.492317 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 04:08:57.492332 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 04:08:57.492348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 04:08:57.492363 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 04:08:57.492407 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 04:08:57.492423 systemd[1]: Reached target paths.target - Path Units.
Dec 16 04:08:57.492461 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 04:08:57.492480 systemd[1]: Reached target swap.target - Swaps.
Dec 16 04:08:57.492495 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 04:08:57.492510 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 04:08:57.492525 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 04:08:57.492558 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 04:08:57.492574 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 04:08:57.492590 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 04:08:57.492605 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 04:08:57.492620 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 04:08:57.492635 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 04:08:57.492651 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 04:08:57.492682 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 04:08:57.492698 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 04:08:57.492714 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 04:08:57.492730 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 04:08:57.492746 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 04:08:57.492761 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 04:08:57.492792 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 04:08:57.492808 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 04:08:57.492824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 04:08:57.492840 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 04:08:57.492872 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 04:08:57.492889 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 04:08:57.492904 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 04:08:57.492966 systemd-journald[328]: Collecting audit messages is enabled.
Dec 16 04:08:57.493018 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 04:08:57.493035 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 04:08:57.493049 kernel: Bridge firewalling registered
Dec 16 04:08:57.493064 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 04:08:57.493080 kernel: audit: type=1130 audit(1765858137.477:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.493110 kernel: audit: type=1130 audit(1765858137.487:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.493126 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 04:08:57.493152 systemd-journald[328]: Journal started
Dec 16 04:08:57.493178 systemd-journald[328]: Runtime Journal (/run/log/journal/d9363c7681e4421dabf0b9665370cd35) is 4.7M, max 37.7M, 33M free.
Dec 16 04:08:57.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.438639 systemd-modules-load[331]: Inserted module 'br_netfilter'
Dec 16 04:08:57.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.497398 kernel: audit: type=1130 audit(1765858137.495:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.497433 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 04:08:57.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.508401 kernel: audit: type=1130 audit(1765858137.502:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.508485 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 04:08:57.510176 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 04:08:57.514574 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 04:08:57.519512 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 04:08:57.544540 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 04:08:57.544830 systemd-tmpfiles[350]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 04:08:57.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.557439 kernel: audit: type=1130 audit(1765858137.545:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.557995 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 04:08:57.568365 kernel: audit: type=1130 audit(1765858137.558:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.568443 kernel: audit: type=1334 audit(1765858137.561:8): prog-id=6 op=LOAD
Dec 16 04:08:57.561000 audit: BPF prog-id=6 op=LOAD
Dec 16 04:08:57.566225 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 04:08:57.569839 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 04:08:57.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.576631 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 04:08:57.583863 kernel: audit: type=1130 audit(1765858137.571:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.583924 kernel: audit: type=1130 audit(1765858137.577:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.584050 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 04:08:57.614237 dracut-cmdline[370]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 04:08:57.642488 systemd-resolved[366]: Positive Trust Anchors:
Dec 16 04:08:57.642509 systemd-resolved[366]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 04:08:57.642516 systemd-resolved[366]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 04:08:57.642558 systemd-resolved[366]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 04:08:57.677256 systemd-resolved[366]: Defaulting to hostname 'linux'.
Dec 16 04:08:57.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.679634 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 04:08:57.680430 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 04:08:57.763485 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 04:08:57.786427 kernel: iscsi: registered transport (tcp)
Dec 16 04:08:57.815515 kernel: iscsi: registered transport (qla4xxx)
Dec 16 04:08:57.815602 kernel: QLogic iSCSI HBA Driver
Dec 16 04:08:57.853693 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 04:08:57.898461 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 04:08:57.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.901385 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 04:08:57.969296 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 04:08:57.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:57.972955 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 04:08:57.976593 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 04:08:58.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:58.018119 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 04:08:58.020000 audit: BPF prog-id=7 op=LOAD
Dec 16 04:08:58.021000 audit: BPF prog-id=8 op=LOAD
Dec 16 04:08:58.022688 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 04:08:58.058479 systemd-udevd[614]: Using default interface naming scheme 'v257'.
Dec 16 04:08:58.080452 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 04:08:58.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:58.085961 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 04:08:58.111915 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 04:08:58.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:58.114000 audit: BPF prog-id=9 op=LOAD
Dec 16 04:08:58.116142 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 04:08:58.131786 dracut-pre-trigger[694]: rd.md=0: removing MD RAID activation
Dec 16 04:08:58.168460 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 04:08:58.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:58.172684 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 04:08:58.180997 systemd-networkd[707]: lo: Link UP
Dec 16 04:08:58.181013 systemd-networkd[707]: lo: Gained carrier
Dec 16 04:08:58.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:58.182404 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 04:08:58.183447 systemd[1]: Reached target network.target - Network.
Dec 16 04:08:58.333257 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 04:08:58.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:58.338606 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 04:08:58.482407 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 16 04:08:58.505658 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 16 04:08:58.529229 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 04:08:58.549084 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 16 04:08:58.552002 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 04:08:58.598402 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 04:08:58.609397 disk-uuid[775]: Primary Header is updated.
Dec 16 04:08:58.609397 disk-uuid[775]: Secondary Entries is updated.
Dec 16 04:08:58.609397 disk-uuid[775]: Secondary Header is updated.
Dec 16 04:08:58.639575 kernel: AES CTR mode by8 optimization enabled
Dec 16 04:08:58.646401 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 16 04:08:58.703463 systemd-networkd[707]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 04:08:58.703482 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 04:08:58.704291 systemd-networkd[707]: eth0: Link UP
Dec 16 04:08:58.708556 systemd-networkd[707]: eth0: Gained carrier
Dec 16 04:08:58.723284 kernel: kauditd_printk_skb: 12 callbacks suppressed
Dec 16 04:08:58.723318 kernel: audit: type=1131 audit(1765858138.714:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:58.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:58.708572 systemd-networkd[707]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 04:08:58.711006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 04:08:58.711237 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 04:08:58.715318 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 04:08:58.731326 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 04:08:58.734502 systemd-networkd[707]: eth0: DHCPv4 address 10.230.69.46/30, gateway 10.230.69.45 acquired from 10.230.69.45
Dec 16 04:08:58.813169 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 04:08:58.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:58.855125 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 04:08:58.860767 kernel: audit: type=1130 audit(1765858138.853:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:58.860011 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 04:08:58.861516 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 04:08:58.864061 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 04:08:58.865240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 04:08:58.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:58.873401 kernel: audit: type=1130 audit(1765858138.867:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:58.899099 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 04:08:58.905343 kernel: audit: type=1130 audit(1765858138.899:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:58.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:59.714406 disk-uuid[776]: Warning: The kernel is still using the old partition table.
Dec 16 04:08:59.714406 disk-uuid[776]: The new table will be used at the next reboot or after you
Dec 16 04:08:59.714406 disk-uuid[776]: run partprobe(8) or kpartx(8)
Dec 16 04:08:59.714406 disk-uuid[776]: The operation has completed successfully.
Dec 16 04:08:59.722762 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 04:08:59.733244 kernel: audit: type=1130 audit(1765858139.723:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:59.733280 kernel: audit: type=1131 audit(1765858139.723:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:59.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:59.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:59.722967 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 04:08:59.726572 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 04:08:59.776414 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (861)
Dec 16 04:08:59.780904 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 04:08:59.780939 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 04:08:59.785678 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 04:08:59.785734 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 04:08:59.802399 kernel: BTRFS info (device vda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 04:08:59.803607 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 04:08:59.809809 kernel: audit: type=1130 audit(1765858139.804:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:59.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:08:59.806242 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 04:08:59.967720 systemd-networkd[707]: eth0: Gained IPv6LL
Dec 16 04:09:00.162105 ignition[880]: Ignition 2.24.0
Dec 16 04:09:00.162131 ignition[880]: Stage: fetch-offline
Dec 16 04:09:00.162246 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Dec 16 04:09:00.164482 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 04:09:00.173505 kernel: audit: type=1130 audit(1765858140.166:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:00.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:00.162270 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 04:09:00.168515 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 04:09:00.162544 ignition[880]: parsed url from cmdline: ""
Dec 16 04:09:00.162552 ignition[880]: no config URL provided
Dec 16 04:09:00.162639 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 04:09:00.162661 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Dec 16 04:09:00.162670 ignition[880]: failed to fetch config: resource requires networking
Dec 16 04:09:00.163078 ignition[880]: Ignition finished successfully
Dec 16 04:09:00.235942 ignition[886]: Ignition 2.24.0
Dec 16 04:09:00.235962 ignition[886]: Stage: fetch
Dec 16 04:09:00.236422 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Dec 16 04:09:00.236441 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 04:09:00.236694 ignition[886]: parsed url from cmdline: ""
Dec 16 04:09:00.236702 ignition[886]: no config URL provided
Dec 16 04:09:00.236716 ignition[886]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 04:09:00.236730 ignition[886]: no config at "/usr/lib/ignition/user.ign"
Dec 16 04:09:00.237008 ignition[886]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 16 04:09:00.237038 ignition[886]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 16 04:09:00.237064 ignition[886]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 16 04:09:00.259916 ignition[886]: GET result: OK
Dec 16 04:09:00.260914 ignition[886]: parsing config with SHA512: 44378143c91ace8801dd9d46c3e00157d7fba7458e258780dc9619464cdd55ef1b8e7b42add505c14cb16aeefb7aeada9be52b5fc85c594ca4b0c6e7b2ca296a
Dec 16 04:09:00.269246 unknown[886]: fetched base config from "system"
Dec 16 04:09:00.269266 unknown[886]: fetched base config from "system"
Dec 16 04:09:00.270000 ignition[886]: fetch: fetch complete
Dec 16 04:09:00.269275 unknown[886]: fetched user config from "openstack"
Dec 16 04:09:00.270008 ignition[886]: fetch: fetch passed
Dec 16 04:09:00.280265 kernel: audit: type=1130 audit(1765858140.273:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:00.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:00.273004 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 04:09:00.270071 ignition[886]: Ignition finished successfully
Dec 16 04:09:00.275921 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 04:09:00.340724 ignition[893]: Ignition 2.24.0
Dec 16 04:09:00.340744 ignition[893]: Stage: kargs
Dec 16 04:09:00.340981 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Dec 16 04:09:00.341000 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 04:09:00.343753 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 04:09:00.350303 kernel: audit: type=1130 audit(1765858140.344:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:00.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:00.341999 ignition[893]: kargs: kargs passed
Dec 16 04:09:00.348553 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 04:09:00.342080 ignition[893]: Ignition finished successfully
Dec 16 04:09:00.393738 ignition[899]: Ignition 2.24.0
Dec 16 04:09:00.393765 ignition[899]: Stage: disks
Dec 16 04:09:00.394063 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Dec 16 04:09:00.394081 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 04:09:00.395984 ignition[899]: disks: disks passed
Dec 16 04:09:00.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:00.397408 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 04:09:00.396054 ignition[899]: Ignition finished successfully
Dec 16 04:09:00.398870 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 04:09:00.399643 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 04:09:00.401029 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 04:09:00.402281 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 04:09:00.403725 systemd[1]: Reached target basic.target - Basic System.
Dec 16 04:09:00.406540 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 04:09:00.460715 systemd-fsck[907]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Dec 16 04:09:00.464576 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 04:09:00.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:00.467710 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 04:09:00.613413 kernel: EXT4-fs (vda9): mounted filesystem 1314c107-11a5-486b-9d52-be9f57b6bf1b r/w with ordered data mode. Quota mode: none.
Dec 16 04:09:00.614989 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 04:09:00.617053 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 04:09:00.620505 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 04:09:00.622884 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 04:09:00.625642 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 04:09:00.631584 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Dec 16 04:09:00.634229 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 04:09:00.635361 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 04:09:00.639972 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 04:09:00.646402 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (915)
Dec 16 04:09:00.647590 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 04:09:00.652691 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 04:09:00.652720 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 04:09:00.671417 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 04:09:00.671484 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 04:09:00.680820 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 04:09:00.757419 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 04:09:00.845099 systemd-networkd[707]: eth0: Ignoring DHCPv6 address 2a02:1348:179:914b:24:19ff:fee6:452e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:914b:24:19ff:fee6:452e/64 assigned by NDisc.
Dec 16 04:09:00.845112 systemd-networkd[707]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 16 04:09:00.905325 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 04:09:00.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:00.908229 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 04:09:00.909918 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 04:09:00.928728 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 04:09:00.932400 kernel: BTRFS info (device vda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 04:09:00.970817 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 04:09:00.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:00.981338 ignition[1018]: INFO : Ignition 2.24.0
Dec 16 04:09:00.981338 ignition[1018]: INFO : Stage: mount
Dec 16 04:09:00.983253 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 04:09:00.983253 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 04:09:00.983253 ignition[1018]: INFO : mount: mount passed
Dec 16 04:09:00.983253 ignition[1018]: INFO : Ignition finished successfully
Dec 16 04:09:00.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:00.984190 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 04:09:01.795403 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 04:09:03.808428 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 04:09:07.818424 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 04:09:07.825453 coreos-metadata[917]: Dec 16 04:09:07.825 WARN failed to locate config-drive, using the metadata service API instead
Dec 16 04:09:07.850855 coreos-metadata[917]: Dec 16 04:09:07.850 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 16 04:09:07.867176 coreos-metadata[917]: Dec 16 04:09:07.867 INFO Fetch successful
Dec 16 04:09:07.872288 coreos-metadata[917]: Dec 16 04:09:07.869 INFO wrote hostname srv-cuii1.gb1.brightbox.com to /sysroot/etc/hostname
Dec 16 04:09:07.875087 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 16 04:09:07.890278 kernel: kauditd_printk_skb: 5 callbacks suppressed
Dec 16 04:09:07.890317 kernel: audit: type=1130 audit(1765858147.877:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:07.890339 kernel: audit: type=1131 audit(1765858147.877:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:07.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:07.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:07.875493 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Dec 16 04:09:07.881005 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 04:09:07.903191 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 04:09:08.107432 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1034)
Dec 16 04:09:08.112420 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 04:09:08.112497 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 04:09:08.119279 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 04:09:08.119355 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 04:09:08.122967 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 04:09:08.165373 ignition[1052]: INFO : Ignition 2.24.0
Dec 16 04:09:08.165373 ignition[1052]: INFO : Stage: files
Dec 16 04:09:08.167808 ignition[1052]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 04:09:08.167808 ignition[1052]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 04:09:08.167808 ignition[1052]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 04:09:08.189978 ignition[1052]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 04:09:08.189978 ignition[1052]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 04:09:08.227407 ignition[1052]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 04:09:08.228819 ignition[1052]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 04:09:08.232050 unknown[1052]: wrote ssh authorized keys file for user: core
Dec 16 04:09:08.233116 ignition[1052]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 04:09:08.251932 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 16 04:09:08.253334 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Dec 16 04:09:08.438060 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 04:09:08.940897 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 16 04:09:08.942348 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 04:09:08.942348 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 04:09:08.942348 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 04:09:08.942348 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 04:09:08.942348 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 04:09:08.942348 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 04:09:08.942348 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 04:09:08.942348 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 04:09:08.953837 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 04:09:08.955005 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 04:09:08.955005 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 04:09:08.957531 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 04:09:08.957531 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 04:09:08.957531 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Dec 16 04:09:09.300792 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 04:09:11.035099 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 04:09:11.035099 ignition[1052]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 04:09:11.039057 ignition[1052]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 04:09:11.047694 ignition[1052]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 04:09:11.047694 ignition[1052]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 04:09:11.047694 ignition[1052]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 04:09:11.053507 ignition[1052]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 04:09:11.053507 ignition[1052]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 04:09:11.053507 ignition[1052]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 04:09:11.053507 ignition[1052]: INFO : files: files passed
Dec 16 04:09:11.053507 ignition[1052]: INFO : Ignition finished successfully
Dec 16 04:09:11.065598 kernel: audit: type=1130 audit(1765858151.058:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.055203 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 04:09:11.063687 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 04:09:11.067260 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 04:09:11.087774 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 04:09:11.088580 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 04:09:11.101207 kernel: audit: type=1130 audit(1765858151.088:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.101287 kernel: audit: type=1131 audit(1765858151.089:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.102663 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 04:09:11.103969 initrd-setup-root-after-ignition[1084]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 04:09:11.105676 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 04:09:11.108206 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 04:09:11.114888 kernel: audit: type=1130 audit(1765858151.108:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.109892 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 04:09:11.116928 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 04:09:11.176158 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 04:09:11.176351 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 04:09:11.188087 kernel: audit: type=1130 audit(1765858151.177:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.188125 kernel: audit: type=1131 audit(1765858151.177:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.178674 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 04:09:11.188730 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 04:09:11.190534 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 04:09:11.192024 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 04:09:11.226204 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 04:09:11.233398 kernel: audit: type=1130 audit(1765858151.226:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.229597 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 04:09:11.252780 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 04:09:11.253268 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 04:09:11.254193 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 04:09:11.255822 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 04:09:11.264239 kernel: audit: type=1131 audit(1765858151.258:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.257443 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 04:09:11.257709 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 04:09:11.264135 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 04:09:11.265060 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 04:09:11.266360 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 04:09:11.267761 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 04:09:11.269200 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 04:09:11.270543 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 04:09:11.272133 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 04:09:11.273471 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 04:09:11.275171 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 04:09:11.276462 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 04:09:11.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.277971 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 04:09:11.279327 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 04:09:11.279560 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 04:09:11.281320 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 04:09:11.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.282288 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 04:09:11.283536 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 04:09:11.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.283808 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 04:09:11.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.285239 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 04:09:11.285454 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 04:09:11.287332 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 04:09:11.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.287631 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 04:09:11.289372 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 04:09:11.289625 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 04:09:11.291845 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 04:09:11.294171 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 04:09:11.294393 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 04:09:11.312687 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 04:09:11.314238 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 04:09:11.315343 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 04:09:11.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.319475 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 04:09:11.320499 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 04:09:11.322222 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 04:09:11.322396 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 04:09:11.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.333590 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 04:09:11.336441 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 04:09:11.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.341714 ignition[1108]: INFO : Ignition 2.24.0
Dec 16 04:09:11.342796 ignition[1108]: INFO : Stage: umount
Dec 16 04:09:11.344461 ignition[1108]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 04:09:11.344461 ignition[1108]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 16 04:09:11.346976 ignition[1108]: INFO : umount: umount passed
Dec 16 04:09:11.346976 ignition[1108]: INFO : Ignition finished successfully
Dec 16 04:09:11.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.346575 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 04:09:11.346755 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 04:09:11.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.349073 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 04:09:11.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.349229 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 04:09:11.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.350654 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 04:09:11.350725 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 04:09:11.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.352097 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 04:09:11.352191 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 04:09:11.353413 systemd[1]: Stopped target network.target - Network.
Dec 16 04:09:11.354624 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 04:09:11.354703 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 04:09:11.356078 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 04:09:11.357345 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 04:09:11.361470 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 04:09:11.362509 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 04:09:11.363132 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 04:09:11.365963 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 04:09:11.366031 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 04:09:11.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.367011 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 04:09:11.367077 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 04:09:11.367761 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 16 04:09:11.367811 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 04:09:11.370619 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 04:09:11.370718 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 04:09:11.372856 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 04:09:11.372960 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 04:09:11.374108 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 04:09:11.374852 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 04:09:11.386181 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 04:09:11.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.387111 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 04:09:11.393311 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 04:09:11.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.394054 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 04:09:11.394234 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 04:09:11.396000 audit: BPF prog-id=6 op=UNLOAD
Dec 16 04:09:11.398000 audit: BPF prog-id=9 op=UNLOAD
Dec 16 04:09:11.397805 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 04:09:11.403174 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 04:09:11.403259 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 04:09:11.405889 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 04:09:11.407339 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 04:09:11.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.407441 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 04:09:11.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.410614 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 04:09:11.410704 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 04:09:11.413475 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 04:09:11.413556 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 04:09:11.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.415968 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 04:09:11.418042 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 04:09:11.421093 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 04:09:11.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.423356 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 04:09:11.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.423515 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 04:09:11.427346 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 04:09:11.427654 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 04:09:11.428801 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 04:09:11.428870 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 04:09:11.435552 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 04:09:11.435615 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 04:09:11.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.436600 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 04:09:11.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.436677 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 04:09:11.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.438768 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 04:09:11.438837 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 04:09:11.440063 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 04:09:11.440136 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 04:09:11.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.442710 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 04:09:11.444717 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 04:09:11.444801 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 04:09:11.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.446819 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 04:09:11.446910 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 04:09:11.447673 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 04:09:11.447743 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 04:09:11.465684 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 04:09:11.467009 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 04:09:11.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.469293 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 04:09:11.469517 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 04:09:11.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:11.471394 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 04:09:11.474211 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 04:09:11.501022 systemd[1]: Switching root.
Dec 16 04:09:11.555735 systemd-journald[328]: Journal stopped
Dec 16 04:09:13.017473 systemd-journald[328]: Received SIGTERM from PID 1 (systemd).
Dec 16 04:09:13.017574 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 04:09:13.017601 kernel: SELinux: policy capability open_perms=1
Dec 16 04:09:13.017622 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 04:09:13.017648 kernel: SELinux: policy capability always_check_network=0
Dec 16 04:09:13.017674 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 04:09:13.017695 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 04:09:13.017742 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 04:09:13.017764 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 04:09:13.017793 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 04:09:13.017815 systemd[1]: Successfully loaded SELinux policy in 71.218ms.
Dec 16 04:09:13.017844 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.389ms.
Dec 16 04:09:13.017867 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 04:09:13.017903 systemd[1]: Detected virtualization kvm.
Dec 16 04:09:13.017943 systemd[1]: Detected architecture x86-64.
Dec 16 04:09:13.017967 systemd[1]: Detected first boot.
Dec 16 04:09:13.017989 systemd[1]: Hostname set to .
Dec 16 04:09:13.018016 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 16 04:09:13.018039 zram_generator::config[1151]: No configuration found.
Dec 16 04:09:13.018061 kernel: Guest personality initialized and is inactive
Dec 16 04:09:13.018096 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 16 04:09:13.018119 kernel: Initialized host personality
Dec 16 04:09:13.018138 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 04:09:13.018160 systemd[1]: Populated /etc with preset unit settings.
Dec 16 04:09:13.018182 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 04:09:13.018203 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 04:09:13.018224 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 04:09:13.018281 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 04:09:13.018317 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 04:09:13.018338 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 04:09:13.018357 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 04:09:13.018378 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 04:09:13.019448 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 04:09:13.019486 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 04:09:13.019542 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 04:09:13.019568 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 04:09:13.019590 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 04:09:13.019612 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 04:09:13.019632 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 04:09:13.019654 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 04:09:13.019691 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 04:09:13.019728 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 04:09:13.019750 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 04:09:13.019770 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 04:09:13.019791 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 04:09:13.019812 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 04:09:13.019848 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 04:09:13.019895 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 04:09:13.019918 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 04:09:13.019939 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 04:09:13.019961 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Dec 16 04:09:13.019982 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 04:09:13.020003 systemd[1]: Reached target swap.target - Swaps.
Dec 16 04:09:13.020043 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 04:09:13.020066 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 04:09:13.020087 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 04:09:13.020108 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 04:09:13.020128 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Dec 16 04:09:13.020149 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 04:09:13.020170 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Dec 16 04:09:13.020206 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Dec 16 04:09:13.020231 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 04:09:13.020252 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 04:09:13.020273 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 04:09:13.020294 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 04:09:13.020321 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 04:09:13.020342 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 04:09:13.023162 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 04:09:13.023199 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 04:09:13.023223 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 04:09:13.023244 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 04:09:13.023266 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 04:09:13.023289 systemd[1]: Reached target machines.target - Containers.
Dec 16 04:09:13.023310 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 04:09:13.023356 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 04:09:13.023430 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 04:09:13.023469 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 04:09:13.023491 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 04:09:13.023512 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 04:09:13.023545 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 04:09:13.023596 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 04:09:13.023635 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 04:09:13.023659 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 04:09:13.023680 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 04:09:13.023702 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 04:09:13.023737 kernel: kauditd_printk_skb: 53 callbacks suppressed
Dec 16 04:09:13.023761 kernel: audit: type=1131 audit(1765858152.888:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.023783 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 04:09:13.023804 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 04:09:13.023826 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 04:09:13.023865 kernel: audit: type=1131 audit(1765858152.898:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.023902 kernel: audit: type=1334 audit(1765858152.910:103): prog-id=14 op=UNLOAD
Dec 16 04:09:13.023923 kernel: audit: type=1334 audit(1765858152.910:104): prog-id=13 op=UNLOAD
Dec 16 04:09:13.023942 kernel: audit: type=1334 audit(1765858152.916:105): prog-id=15 op=LOAD
Dec 16 04:09:13.023961 kernel: audit: type=1334 audit(1765858152.916:106): prog-id=16 op=LOAD
Dec 16 04:09:13.023982 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 04:09:13.024003 kernel: audit: type=1334 audit(1765858152.916:107): prog-id=17 op=LOAD
Dec 16 04:09:13.024038 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 04:09:13.024063 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 04:09:13.024085 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 04:09:13.024106 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 04:09:13.024127 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 04:09:13.024148 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 04:09:13.024169 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 04:09:13.024205 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 04:09:13.024229 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 04:09:13.024250 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 04:09:13.024272 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 04:09:13.024308 kernel: fuse: init (API version 7.41)
Dec 16 04:09:13.024346 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 04:09:13.024370 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 04:09:13.024409 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 04:09:13.024432 kernel: audit: type=1130 audit(1765858153.001:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.024486 systemd-journald[1240]: Collecting audit messages is enabled.
Dec 16 04:09:13.024548 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 04:09:13.024574 kernel: audit: type=1305 audit(1765858153.010:109): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 16 04:09:13.024595 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 04:09:13.024618 systemd-journald[1240]: Journal started
Dec 16 04:09:13.024673 systemd-journald[1240]: Runtime Journal (/run/log/journal/d9363c7681e4421dabf0b9665370cd35) is 4.7M, max 37.7M, 33M free.
Dec 16 04:09:13.026770 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 04:09:12.758000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 16 04:09:12.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:12.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.031488 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 04:09:13.031528 kernel: audit: type=1300 audit(1765858153.010:109): arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff831042e0 a2=4000 a3=0 items=0 ppid=1 pid=1240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:09:12.910000 audit: BPF prog-id=14 op=UNLOAD
Dec 16 04:09:12.910000 audit: BPF prog-id=13 op=UNLOAD
Dec 16 04:09:12.916000 audit: BPF prog-id=15 op=LOAD
Dec 16 04:09:12.916000 audit: BPF prog-id=16 op=LOAD
Dec 16 04:09:12.916000 audit: BPF prog-id=17 op=LOAD
Dec 16 04:09:13.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.010000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 16 04:09:13.010000 audit[1240]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff831042e0 a2=4000 a3=0 items=0 ppid=1 pid=1240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:09:12.654725 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 04:09:12.665562 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 16 04:09:12.666369 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 04:09:13.037828 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 04:09:13.038111 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 04:09:13.042002 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 04:09:13.042448 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 04:09:13.043958 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 04:09:13.044230 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 04:09:13.010000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 16 04:09:13.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.048526 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 04:09:13.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.069359 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Dec 16 04:09:13.077581 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 04:09:13.084498 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 04:09:13.085298 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 04:09:13.085345 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 04:09:13.092935 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 04:09:13.099417 kernel: ACPI: bus type drm_connector registered
Dec 16 04:09:13.101501 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 04:09:13.101694 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 04:09:13.109696 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 04:09:13.116964 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 04:09:13.117859 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 04:09:13.124248 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 04:09:13.125189 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 04:09:13.128740 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 04:09:13.133014 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 04:09:13.134506 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 04:09:13.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.137739 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 04:09:13.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.139518 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 04:09:13.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.143722 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 04:09:13.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.146539 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 04:09:13.148075 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 04:09:13.156429 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 04:09:13.166335 systemd-journald[1240]: Time spent on flushing to /var/log/journal/d9363c7681e4421dabf0b9665370cd35 is 87.754ms for 1298 entries.
Dec 16 04:09:13.166335 systemd-journald[1240]: System Journal (/var/log/journal/d9363c7681e4421dabf0b9665370cd35) is 8M, max 588.1M, 580.1M free.
Dec 16 04:09:13.265641 systemd-journald[1240]: Received client request to flush runtime journal.
Dec 16 04:09:13.265697 kernel: loop1: detected capacity change from 0 to 224512
Dec 16 04:09:13.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.166663 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 04:09:13.171590 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 04:09:13.177882 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 04:09:13.210031 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 04:09:13.213553 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 04:09:13.219744 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 04:09:13.270942 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 04:09:13.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.296836 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 04:09:13.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.298691 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 04:09:13.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.312408 kernel: loop2: detected capacity change from 0 to 8
Dec 16 04:09:13.328975 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 04:09:13.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.332000 audit: BPF prog-id=18 op=LOAD
Dec 16 04:09:13.332000 audit: BPF prog-id=19 op=LOAD
Dec 16 04:09:13.333000 audit: BPF prog-id=20 op=LOAD
Dec 16 04:09:13.337395 kernel: loop3: detected capacity change from 0 to 111560
Dec 16 04:09:13.336604 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Dec 16 04:09:13.339000 audit: BPF prog-id=21 op=LOAD
Dec 16 04:09:13.343518 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 04:09:13.348686 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 04:09:13.366000 audit: BPF prog-id=22 op=LOAD
Dec 16 04:09:13.367000 audit: BPF prog-id=23 op=LOAD
Dec 16 04:09:13.367000 audit: BPF prog-id=24 op=LOAD
Dec 16 04:09:13.370903 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 04:09:13.376000 audit: BPF prog-id=25 op=LOAD
Dec 16 04:09:13.376000 audit: BPF prog-id=26 op=LOAD
Dec 16 04:09:13.377000 audit: BPF prog-id=27 op=LOAD
Dec 16 04:09:13.381602 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Dec 16 04:09:13.392434 kernel: loop4: detected capacity change from 0 to 50784
Dec 16 04:09:13.442516 systemd-tmpfiles[1308]: ACLs are not supported, ignoring.
Dec 16 04:09:13.442558 systemd-tmpfiles[1308]: ACLs are not supported, ignoring.
Dec 16 04:09:13.462418 kernel: loop5: detected capacity change from 0 to 224512
Dec 16 04:09:13.465860 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 04:09:13.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.471693 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 04:09:13.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.483406 kernel: loop6: detected capacity change from 0 to 8
Dec 16 04:09:13.491823 kernel: loop7: detected capacity change from 0 to 111560
Dec 16 04:09:13.498364 systemd-nsresourced[1311]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Dec 16 04:09:13.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.504258 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Dec 16 04:09:13.510414 kernel: loop1: detected capacity change from 0 to 50784
Dec 16 04:09:13.525305 (sd-merge)[1314]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'.
Dec 16 04:09:13.532264 (sd-merge)[1314]: Merged extensions into '/usr'.
Dec 16 04:09:13.541264 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 04:09:13.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:13.542875 systemd[1]: Reload requested from client PID 1283 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 04:09:13.542903 systemd[1]: Reloading...
Dec 16 04:09:13.687553 systemd-oomd[1306]: No swap; memory pressure usage will be degraded
Dec 16 04:09:13.727438 zram_generator::config[1360]: No configuration found.
Dec 16 04:09:13.727793 systemd-resolved[1307]: Positive Trust Anchors:
Dec 16 04:09:13.727813 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 04:09:13.727821 systemd-resolved[1307]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 04:09:13.727874 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 04:09:13.759525 systemd-resolved[1307]: Using system hostname 'srv-cuii1.gb1.brightbox.com'.
Dec 16 04:09:14.026214 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 04:09:14.027235 systemd[1]: Reloading finished in 483 ms.
Dec 16 04:09:14.052721 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Dec 16 04:09:14.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.054627 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 04:09:14.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.055983 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 04:09:14.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.061310 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 04:09:14.068624 systemd[1]: Starting ensure-sysext.service...
Dec 16 04:09:14.075591 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 04:09:14.079000 audit: BPF prog-id=28 op=LOAD
Dec 16 04:09:14.081000 audit: BPF prog-id=21 op=UNLOAD
Dec 16 04:09:14.081000 audit: BPF prog-id=29 op=LOAD
Dec 16 04:09:14.081000 audit: BPF prog-id=18 op=UNLOAD
Dec 16 04:09:14.082000 audit: BPF prog-id=30 op=LOAD
Dec 16 04:09:14.082000 audit: BPF prog-id=31 op=LOAD
Dec 16 04:09:14.082000 audit: BPF prog-id=19 op=UNLOAD
Dec 16 04:09:14.082000 audit: BPF prog-id=20 op=UNLOAD
Dec 16 04:09:14.084000 audit: BPF prog-id=32 op=LOAD
Dec 16 04:09:14.084000 audit: BPF prog-id=15 op=UNLOAD
Dec 16 04:09:14.084000 audit: BPF prog-id=33 op=LOAD
Dec 16 04:09:14.084000 audit: BPF prog-id=34 op=LOAD
Dec 16 04:09:14.085000 audit: BPF prog-id=16 op=UNLOAD
Dec 16 04:09:14.085000 audit: BPF prog-id=17 op=UNLOAD
Dec 16 04:09:14.087000 audit: BPF prog-id=35 op=LOAD
Dec 16 04:09:14.091000 audit: BPF prog-id=22 op=UNLOAD
Dec 16 04:09:14.091000 audit: BPF prog-id=36 op=LOAD
Dec 16 04:09:14.091000 audit: BPF prog-id=37 op=LOAD
Dec 16 04:09:14.091000 audit: BPF prog-id=23 op=UNLOAD
Dec 16 04:09:14.091000 audit: BPF prog-id=24 op=UNLOAD
Dec 16 04:09:14.092000 audit: BPF prog-id=38 op=LOAD
Dec 16 04:09:14.092000 audit: BPF prog-id=25 op=UNLOAD
Dec 16 04:09:14.092000 audit: BPF prog-id=39 op=LOAD
Dec 16 04:09:14.093000 audit: BPF prog-id=40 op=LOAD
Dec 16 04:09:14.093000 audit: BPF prog-id=26 op=UNLOAD
Dec 16 04:09:14.093000 audit: BPF prog-id=27 op=UNLOAD
Dec 16 04:09:14.112026 systemd[1]: Reload requested from client PID 1413 ('systemctl') (unit ensure-sysext.service)...
Dec 16 04:09:14.112056 systemd[1]: Reloading...
Dec 16 04:09:14.137056 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 04:09:14.137616 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 04:09:14.138225 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 04:09:14.146476 systemd-tmpfiles[1414]: ACLs are not supported, ignoring.
Dec 16 04:09:14.147541 systemd-tmpfiles[1414]: ACLs are not supported, ignoring.
Dec 16 04:09:14.163937 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 04:09:14.164430 systemd-tmpfiles[1414]: Skipping /boot
Dec 16 04:09:14.198990 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 04:09:14.199535 systemd-tmpfiles[1414]: Skipping /boot
Dec 16 04:09:14.285411 zram_generator::config[1446]: No configuration found.
Dec 16 04:09:14.573679 systemd[1]: Reloading finished in 461 ms.
Dec 16 04:09:14.603096 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 04:09:14.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.606000 audit: BPF prog-id=41 op=LOAD
Dec 16 04:09:14.606000 audit: BPF prog-id=29 op=UNLOAD
Dec 16 04:09:14.606000 audit: BPF prog-id=42 op=LOAD
Dec 16 04:09:14.606000 audit: BPF prog-id=43 op=LOAD
Dec 16 04:09:14.606000 audit: BPF prog-id=30 op=UNLOAD
Dec 16 04:09:14.606000 audit: BPF prog-id=31 op=UNLOAD
Dec 16 04:09:14.608000 audit: BPF prog-id=44 op=LOAD
Dec 16 04:09:14.608000 audit: BPF prog-id=28 op=UNLOAD
Dec 16 04:09:14.610000 audit: BPF prog-id=45 op=LOAD
Dec 16 04:09:14.610000 audit: BPF prog-id=32 op=UNLOAD
Dec 16 04:09:14.610000 audit: BPF prog-id=46 op=LOAD
Dec 16 04:09:14.610000 audit: BPF prog-id=47 op=LOAD
Dec 16 04:09:14.610000 audit: BPF prog-id=33 op=UNLOAD
Dec 16 04:09:14.611000 audit: BPF prog-id=34 op=UNLOAD
Dec 16 04:09:14.611000 audit: BPF prog-id=48 op=LOAD
Dec 16 04:09:14.611000 audit: BPF prog-id=35 op=UNLOAD
Dec 16 04:09:14.611000 audit: BPF prog-id=49 op=LOAD
Dec 16 04:09:14.611000 audit: BPF prog-id=50 op=LOAD
Dec 16 04:09:14.612000 audit: BPF prog-id=36 op=UNLOAD
Dec 16 04:09:14.612000 audit: BPF prog-id=37 op=UNLOAD
Dec 16 04:09:14.619000 audit: BPF prog-id=51 op=LOAD
Dec 16 04:09:14.619000 audit: BPF prog-id=38 op=UNLOAD
Dec 16 04:09:14.619000 audit: BPF prog-id=52 op=LOAD
Dec 16 04:09:14.619000 audit: BPF prog-id=53 op=LOAD
Dec 16 04:09:14.619000 audit: BPF prog-id=39 op=UNLOAD
Dec 16 04:09:14.619000 audit: BPF prog-id=40 op=UNLOAD
Dec 16 04:09:14.624222 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 04:09:14.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.636989 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 04:09:14.638988 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 04:09:14.642708 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 04:09:14.643701 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 04:09:14.651826 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 04:09:14.655783 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 04:09:14.658780 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 04:09:14.661688 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 04:09:14.661993 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 04:09:14.665532 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 04:09:14.666279 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 04:09:14.670243 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 04:09:14.670000 audit: BPF prog-id=8 op=UNLOAD
Dec 16 04:09:14.671000 audit: BPF prog-id=7 op=UNLOAD
Dec 16 04:09:14.671000 audit: BPF prog-id=54 op=LOAD
Dec 16 04:09:14.672000 audit: BPF prog-id=55 op=LOAD
Dec 16 04:09:14.675189 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 04:09:14.687817 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 04:09:14.689427 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 04:09:14.692148 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 04:09:14.693340 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 04:09:14.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.700836 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 04:09:14.701122 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 04:09:14.705114 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 04:09:14.706012 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 04:09:14.706261 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 04:09:14.707481 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 04:09:14.707640 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 04:09:14.716009 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 04:09:14.716338 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 04:09:14.718117 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 04:09:14.719618 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 04:09:14.719890 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 04:09:14.720040 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 04:09:14.720214 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 04:09:14.727271 systemd[1]: Finished ensure-sysext.service.
Dec 16 04:09:14.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.739000 audit: BPF prog-id=56 op=LOAD
Dec 16 04:09:14.746641 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 16 04:09:14.765946 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 04:09:14.766325 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 04:09:14.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.772804 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 04:09:14.772000 audit[1518]: SYSTEM_BOOT pid=1518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.773462 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 04:09:14.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.782505 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 04:09:14.791898 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 04:09:14.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.820199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 04:09:14.821789 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 04:09:14.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.824001 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 04:09:14.829419 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 04:09:14.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.831340 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 04:09:14.832758 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 04:09:14.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:14.855000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 16 04:09:14.855000 audit[1545]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe926d0280 a2=420 a3=0 items=0 ppid=1504 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:09:14.855000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 04:09:14.856009 augenrules[1545]: No rules
Dec 16 04:09:14.857827 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 04:09:14.858851 systemd-udevd[1517]: Using default interface naming scheme 'v257'.
Dec 16 04:09:14.859715 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 04:09:14.885978 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 04:09:14.887092 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 04:09:14.921497 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 04:09:14.929689 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 04:09:14.996849 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 16 04:09:14.998393 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 04:09:15.108552 systemd-networkd[1555]: lo: Link UP
Dec 16 04:09:15.108567 systemd-networkd[1555]: lo: Gained carrier
Dec 16 04:09:15.112061 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 04:09:15.114631 systemd[1]: Reached target network.target - Network.
Dec 16 04:09:15.118671 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 04:09:15.123007 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 04:09:15.183075 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 04:09:15.322314 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 04:09:15.382895 systemd-networkd[1555]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 04:09:15.382911 systemd-networkd[1555]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 04:09:15.385618 systemd-networkd[1555]: eth0: Link UP
Dec 16 04:09:15.385889 systemd-networkd[1555]: eth0: Gained carrier
Dec 16 04:09:15.385911 systemd-networkd[1555]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 04:09:15.403497 systemd-networkd[1555]: eth0: DHCPv4 address 10.230.69.46/30, gateway 10.230.69.45 acquired from 10.230.69.45
Dec 16 04:09:15.406340 systemd-timesyncd[1526]: Network configuration changed, trying to establish connection.
Dec 16 04:09:15.447415 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 04:09:15.464520 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 04:09:15.480653 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 04:09:15.537350 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 04:09:15.550054 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Dec 16 04:09:15.565707 ldconfig[1510]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 04:09:15.568400 kernel: ACPI: button: Power Button [PWRF]
Dec 16 04:09:15.572576 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 04:09:15.579715 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 04:09:15.594401 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 16 04:09:15.597454 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 16 04:09:15.614081 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 04:09:15.616239 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 04:09:15.618275 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 04:09:15.619546 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 04:09:15.620351 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 04:09:15.621353 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 04:09:15.624062 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 04:09:15.625061 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Dec 16 04:09:15.626469 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Dec 16 04:09:15.627362 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 04:09:15.628454 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 04:09:15.628499 systemd[1]: Reached target paths.target - Path Units.
Dec 16 04:09:15.629463 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 04:09:15.631749 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 04:09:15.636102 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 04:09:15.642290 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 04:09:15.643289 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 04:09:15.644026 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 04:09:15.654178 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 04:09:15.656973 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 04:09:15.658624 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 04:09:15.661444 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 04:09:15.662454 systemd[1]: Reached target basic.target - Basic System.
Dec 16 04:09:15.663528 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 04:09:15.663579 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 04:09:15.667742 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 04:09:15.671427 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 04:09:15.675741 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 04:09:15.698195 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 04:09:15.704365 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 04:09:15.710695 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 04:09:15.712475 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 04:09:15.723405 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 04:09:15.724792 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 04:09:15.736767 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 04:09:15.751028 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 04:09:15.758316 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 04:09:15.770749 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 04:09:15.784524 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 04:09:15.786013 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 04:09:15.786716 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 04:09:15.796837 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 04:09:15.804607 jq[1603]: false
Dec 16 04:09:15.805220 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 04:09:15.814154 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 04:09:15.815344 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 04:09:15.816427 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 04:09:15.820146 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 04:09:15.821564 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 04:09:15.830819 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Refreshing passwd entry cache
Dec 16 04:09:15.831371 oslogin_cache_refresh[1605]: Refreshing passwd entry cache
Dec 16 04:09:15.871957 dbus-daemon[1600]: [system] SELinux support is enabled
Dec 16 04:09:15.875560 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 04:09:15.881549 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 04:09:15.881606 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 04:09:15.887300 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 04:09:15.887330 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 04:09:15.888526 dbus-daemon[1600]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1555 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 16 04:09:15.908882 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 16 04:09:15.915949 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Failure getting users, quitting
Dec 16 04:09:15.915949 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 04:09:15.915949 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Refreshing group entry cache
Dec 16 04:09:15.915219 oslogin_cache_refresh[1605]: Failure getting users, quitting
Dec 16 04:09:15.915245 oslogin_cache_refresh[1605]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 04:09:15.915359 oslogin_cache_refresh[1605]: Refreshing group entry cache
Dec 16 04:09:15.920431 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Failure getting groups, quitting
Dec 16 04:09:15.920431 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 04:09:15.920532 jq[1615]: true
Dec 16 04:09:15.918967 oslogin_cache_refresh[1605]: Failure getting groups, quitting
Dec 16 04:09:15.918982 oslogin_cache_refresh[1605]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 04:09:15.924456 tar[1620]: linux-amd64/LICENSE
Dec 16 04:09:15.924456 tar[1620]: linux-amd64/helm
Dec 16 04:09:15.927372 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 16 04:09:15.931131 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 16 04:09:15.959341 extend-filesystems[1604]: Found /dev/vda6
Dec 16 04:09:15.965577 update_engine[1614]: I20251216 04:09:15.964109 1614 main.cc:92] Flatcar Update Engine starting
Dec 16 04:09:15.971428 extend-filesystems[1604]: Found /dev/vda9
Dec 16 04:09:15.971248 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 04:09:15.974957 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 04:09:15.981252 extend-filesystems[1604]: Checking size of /dev/vda9
Dec 16 04:09:15.985113 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 04:09:15.990480 update_engine[1614]: I20251216 04:09:15.986073 1614 update_check_scheduler.cc:74] Next update check in 5m27s
Dec 16 04:09:15.999934 jq[1642]: true
Dec 16 04:09:16.047108 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 04:09:16.063234 extend-filesystems[1604]: Resized partition /dev/vda9
Dec 16 04:09:16.070174 extend-filesystems[1660]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 04:09:16.086402 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks
Dec 16 04:09:16.145878 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 04:09:16.307489 bash[1676]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 04:09:16.307113 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 04:09:16.318886 systemd[1]: Starting sshkeys.service...
Dec 16 04:09:16.321421 kernel: EXT4-fs (vda9): resized filesystem to 14138363
Dec 16 04:09:16.347712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 04:09:16.377353 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 16 04:09:16.391573 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 16 04:09:16.400210 extend-filesystems[1660]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 16 04:09:16.400210 extend-filesystems[1660]: old_desc_blocks = 1, new_desc_blocks = 7
Dec 16 04:09:16.400210 extend-filesystems[1660]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long.
Dec 16 04:09:16.411357 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 04:09:16.428736 extend-filesystems[1604]: Resized filesystem in /dev/vda9
Dec 16 04:09:16.434639 sshd_keygen[1639]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 04:09:16.412349 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 04:09:16.447412 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 04:09:16.572509 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 04:09:16.583575 systemd-networkd[1555]: eth0: Gained IPv6LL
Dec 16 04:09:16.584177 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 04:09:16.584446 systemd-timesyncd[1526]: Network configuration changed, trying to establish connection.
Dec 16 04:09:16.602913 systemd[1]: Started sshd@0-10.230.69.46:22-139.178.89.65:49908.service - OpenSSH per-connection server daemon (139.178.89.65:49908).
Dec 16 04:09:16.606799 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 04:09:16.614666 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 04:09:16.625853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 04:09:16.635349 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 04:09:16.693656 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 04:09:16.697484 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 04:09:16.705482 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 04:09:16.758617 containerd[1631]: time="2025-12-16T04:09:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 04:09:16.772966 containerd[1631]: time="2025-12-16T04:09:16.771788185Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Dec 16 04:09:16.843718 systemd-logind[1613]: Watching system buttons on /dev/input/event3 (Power Button)
Dec 16 04:09:16.843762 systemd-logind[1613]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 16 04:09:16.848953 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 04:09:16.879935 containerd[1631]: time="2025-12-16T04:09:16.879181805Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.62µs"
Dec 16 04:09:16.879935 containerd[1631]: time="2025-12-16T04:09:16.879230906Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 04:09:16.879935 containerd[1631]: time="2025-12-16T04:09:16.879298221Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 04:09:16.879935 containerd[1631]: time="2025-12-16T04:09:16.879318669Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 04:09:16.889410 containerd[1631]: time="2025-12-16T04:09:16.885599968Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 04:09:16.889410 containerd[1631]: time="2025-12-16T04:09:16.885675058Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 04:09:16.889410 containerd[1631]: time="2025-12-16T04:09:16.886470050Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 04:09:16.889410 containerd[1631]: time="2025-12-16T04:09:16.888417537Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 04:09:16.889591 containerd[1631]: time="2025-12-16T04:09:16.889463023Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 04:09:16.889591 containerd[1631]: time="2025-12-16T04:09:16.889488939Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 04:09:16.889591 containerd[1631]: time="2025-12-16T04:09:16.889527704Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 04:09:16.889591 containerd[1631]: time="2025-12-16T04:09:16.889557492Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Dec 16 04:09:16.893396 containerd[1631]: time="2025-12-16T04:09:16.892064136Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Dec 16 04:09:16.895448 containerd[1631]: time="2025-12-16T04:09:16.892093814Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 04:09:16.895701 containerd[1631]: time="2025-12-16T04:09:16.895673058Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 04:09:16.896665 containerd[1631]: time="2025-12-16T04:09:16.896633235Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 04:09:16.900540 containerd[1631]: time="2025-12-16T04:09:16.900503725Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 04:09:16.900599 containerd[1631]: time="2025-12-16T04:09:16.900556356Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 04:09:16.902166 containerd[1631]: time="2025-12-16T04:09:16.900655382Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 04:09:16.904239 containerd[1631]: time="2025-12-16T04:09:16.904192610Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 04:09:16.904425 containerd[1631]: time="2025-12-16T04:09:16.904357687Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 04:09:16.914888 containerd[1631]: time="2025-12-16T04:09:16.914712360Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 04:09:16.914888 containerd[1631]: time="2025-12-16T04:09:16.914773765Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Dec 16 04:09:16.914888 containerd[1631]: time="2025-12-16T04:09:16.914876729Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Dec 16 04:09:16.915035 containerd[1631]: time="2025-12-16T04:09:16.914897856Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 04:09:16.915035 containerd[1631]: time="2025-12-16T04:09:16.914917613Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 04:09:16.915035 containerd[1631]: time="2025-12-16T04:09:16.914940321Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 04:09:16.915035 containerd[1631]: time="2025-12-16T04:09:16.914968328Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 04:09:16.915035 containerd[1631]: time="2025-12-16T04:09:16.914988249Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 04:09:16.915035 containerd[1631]: time="2025-12-16T04:09:16.915022323Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 04:09:16.915207 containerd[1631]: time="2025-12-16T04:09:16.915043396Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 04:09:16.915207 containerd[1631]: time="2025-12-16T04:09:16.915061918Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 04:09:16.915207 containerd[1631]: time="2025-12-16T04:09:16.915087767Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 04:09:16.915207 containerd[1631]: time="2025-12-16T04:09:16.915106014Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 04:09:16.915207 containerd[1631]: time="2025-12-16T04:09:16.915125276Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 04:09:16.915364 containerd[1631]: time="2025-12-16T04:09:16.915280326Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 04:09:16.915364 containerd[1631]: time="2025-12-16T04:09:16.915327717Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 04:09:16.915479 containerd[1631]: time="2025-12-16T04:09:16.915373330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 04:09:16.915479 containerd[1631]: time="2025-12-16T04:09:16.915426627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 04:09:16.915581 containerd[1631]: time="2025-12-16T04:09:16.915488506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 04:09:16.915581 containerd[1631]: time="2025-12-16T04:09:16.915506730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 04:09:16.915581 containerd[1631]: time="2025-12-16T04:09:16.915525119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 04:09:16.915581 containerd[1631]: time="2025-12-16T04:09:16.915540712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 04:09:16.915581 containerd[1631]: time="2025-12-16T04:09:16.915563233Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 16 04:09:16.915581 containerd[1631]: time="2025-12-16T04:09:16.915578042Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 16 04:09:16.920488 containerd[1631]: time="2025-12-16T04:09:16.915592542Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 16 04:09:16.920488 containerd[1631]: time="2025-12-16T04:09:16.915662079Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 16 04:09:16.920488 containerd[1631]: time="2025-12-16T04:09:16.915718962Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 16 04:09:16.920488 containerd[1631]: time="2025-12-16T04:09:16.915740135Z" level=info msg="Start snapshots syncer"
Dec 16 04:09:16.920488 containerd[1631]: time="2025-12-16T04:09:16.915799028Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 16 04:09:16.920674 containerd[1631]: time="2025-12-16T04:09:16.916159772Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 16 04:09:16.920674 containerd[1631]: time="2025-12-16T04:09:16.916236731Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 16 04:09:16.920942 containerd[1631]: time="2025-12-16T04:09:16.916298519Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 16 04:09:16.920942 containerd[1631]: time="2025-12-16T04:09:16.920497999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 16 04:09:16.920942 containerd[1631]: time="2025-12-16T04:09:16.920558578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 16 04:09:16.920942 containerd[1631]: time="2025-12-16T04:09:16.920581651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 16 04:09:16.920942 containerd[1631]: time="2025-12-16T04:09:16.920598769Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 16 04:09:16.920942 containerd[1631]: time="2025-12-16T04:09:16.920616487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 16 04:09:16.920942 containerd[1631]: time="2025-12-16T04:09:16.920658773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 16 04:09:16.920942 containerd[1631]: time="2025-12-16T04:09:16.920678839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 16 04:09:16.920942 containerd[1631]: time="2025-12-16T04:09:16.920696498Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 16 04:09:16.920942 containerd[1631]: time="2025-12-16T04:09:16.920721811Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 16 04:09:16.920942 containerd[1631]: time="2025-12-16T04:09:16.920810546Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 04:09:16.920942 containerd[1631]: time="2025-12-16T04:09:16.920836488Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 04:09:16.920942 containerd[1631]: time="2025-12-16T04:09:16.920851106Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 04:09:16.926430 containerd[1631]: time="2025-12-16T04:09:16.920867226Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 04:09:16.926430 containerd[1631]: time="2025-12-16T04:09:16.920880679Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 16 04:09:16.926430 containerd[1631]: time="2025-12-16T04:09:16.920895144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 16 04:09:16.926430 containerd[1631]: time="2025-12-16T04:09:16.920910864Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 16 04:09:16.926430 containerd[1631]: time="2025-12-16T04:09:16.920928521Z" level=info msg="runtime interface created"
Dec 16 04:09:16.926430 containerd[1631]: time="2025-12-16T04:09:16.920938047Z" level=info msg="created NRI interface"
Dec 16 04:09:16.926430 containerd[1631]: time="2025-12-16T04:09:16.920950303Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 16 04:09:16.926430 containerd[1631]: time="2025-12-16T04:09:16.920984843Z" level=info msg="Connect containerd service"
Dec 16 04:09:16.926430 containerd[1631]: time="2025-12-16T04:09:16.921048807Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 04:09:16.926430 containerd[1631]: time="2025-12-16T04:09:16.925038702Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 04:09:16.940953 locksmithd[1651]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 04:09:16.945049 systemd-logind[1613]: New seat seat0.
Dec 16 04:09:16.955294 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 04:09:16.969577 dbus-daemon[1600]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 16 04:09:16.985476 dbus-daemon[1600]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.9' (uid=0 pid=1638 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 16 04:09:17.074151 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 04:09:17.076039 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 16 04:09:17.216041 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 04:09:17.224409 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 16 04:09:17.338020 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 04:09:17.344611 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 04:09:17.360242 containerd[1631]: time="2025-12-16T04:09:17.360188363Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 04:09:17.365984 containerd[1631]: time="2025-12-16T04:09:17.365128830Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 04:09:17.365984 containerd[1631]: time="2025-12-16T04:09:17.360530637Z" level=info msg="Start subscribing containerd event"
Dec 16 04:09:17.365984 containerd[1631]: time="2025-12-16T04:09:17.365209621Z" level=info msg="Start recovering state"
Dec 16 04:09:17.365984 containerd[1631]: time="2025-12-16T04:09:17.365410700Z" level=info msg="Start event monitor"
Dec 16 04:09:17.365984 containerd[1631]: time="2025-12-16T04:09:17.365433994Z" level=info msg="Start cni network conf syncer for default"
Dec 16 04:09:17.365984 containerd[1631]: time="2025-12-16T04:09:17.365449198Z" level=info msg="Start streaming server"
Dec 16 04:09:17.365984 containerd[1631]: time="2025-12-16T04:09:17.365462503Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 16 04:09:17.365984 containerd[1631]: time="2025-12-16T04:09:17.365475697Z" level=info msg="runtime interface starting up..."
Dec 16 04:09:17.365984 containerd[1631]: time="2025-12-16T04:09:17.365486488Z" level=info msg="starting plugins..."
Dec 16 04:09:17.365984 containerd[1631]: time="2025-12-16T04:09:17.365510447Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 16 04:09:17.365984 containerd[1631]: time="2025-12-16T04:09:17.365690164Z" level=info msg="containerd successfully booted in 0.609025s"
Dec 16 04:09:17.401915 systemd[1]: Started containerd.service - containerd container runtime.
Dec 16 04:09:17.405578 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 04:09:17.598245 polkitd[1738]: Started polkitd version 126
Dec 16 04:09:17.604715 polkitd[1738]: Loading rules from directory /etc/polkit-1/rules.d
Dec 16 04:09:17.605562 polkitd[1738]: Loading rules from directory /run/polkit-1/rules.d
Dec 16 04:09:17.605718 polkitd[1738]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 16 04:09:17.606141 polkitd[1738]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Dec 16 04:09:17.606279 polkitd[1738]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 16 04:09:17.606428 polkitd[1738]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 16 04:09:17.607269 polkitd[1738]: Finished loading, compiling and executing 2 rules
Dec 16 04:09:17.607806 systemd[1]: Started polkit.service - Authorization Manager.
Dec 16 04:09:17.610601 dbus-daemon[1600]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 16 04:09:17.611086 polkitd[1738]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 16 04:09:17.634260 systemd-hostnamed[1638]: Hostname set to (static)
Dec 16 04:09:17.643110 systemd-timesyncd[1526]: Network configuration changed, trying to establish connection.
Dec 16 04:09:17.646633 systemd-networkd[1555]: eth0: Ignoring DHCPv6 address 2a02:1348:179:914b:24:19ff:fee6:452e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:914b:24:19ff:fee6:452e/64 assigned by NDisc.
Dec 16 04:09:17.646642 systemd-networkd[1555]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 16 04:09:17.692056 sshd[1700]: Accepted publickey for core from 139.178.89.65 port 49908 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 04:09:17.696570 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 04:09:17.733174 systemd-logind[1613]: New session 1 of user core.
Dec 16 04:09:17.736924 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 04:09:17.741868 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 04:09:17.803991 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 04:09:17.814143 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 04:09:17.936512 tar[1620]: linux-amd64/README.md
Dec 16 04:09:17.939269 (systemd)[1757]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 04:09:17.946769 systemd-logind[1613]: New session 2 of user core.
Dec 16 04:09:17.960696 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 04:09:18.179765 systemd[1757]: Queued start job for default target default.target.
Dec 16 04:09:18.190435 systemd[1757]: Created slice app.slice - User Application Slice.
Dec 16 04:09:18.190484 systemd[1757]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Dec 16 04:09:18.190508 systemd[1757]: Reached target paths.target - Paths.
Dec 16 04:09:18.190594 systemd[1757]: Reached target timers.target - Timers.
Dec 16 04:09:18.194546 systemd[1757]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 04:09:18.195858 systemd[1757]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Dec 16 04:09:18.222693 systemd[1757]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 04:09:18.222849 systemd[1757]: Reached target sockets.target - Sockets.
Dec 16 04:09:18.226154 systemd[1757]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Dec 16 04:09:18.226319 systemd[1757]: Reached target basic.target - Basic System.
Dec 16 04:09:18.226444 systemd[1757]: Reached target default.target - Main User Target.
Dec 16 04:09:18.226516 systemd[1757]: Startup finished in 239ms.
Dec 16 04:09:18.226846 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 04:09:18.242828 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 04:09:18.260409 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 04:09:18.285674 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 04:09:18.709638 systemd[1]: Started sshd@1-10.230.69.46:22-139.178.89.65:49922.service - OpenSSH per-connection server daemon (139.178.89.65:49922).
Dec 16 04:09:19.020874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 04:09:19.036944 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 04:09:19.336979 systemd-timesyncd[1526]: Network configuration changed, trying to establish connection.
Dec 16 04:09:19.547395 sshd[1776]: Accepted publickey for core from 139.178.89.65 port 49922 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 04:09:19.550176 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 04:09:19.562453 systemd-logind[1613]: New session 3 of user core.
Dec 16 04:09:19.567967 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 04:09:19.915440 kubelet[1784]: E1216 04:09:19.915333 1784 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 04:09:19.918589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 04:09:19.919032 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 04:09:19.920185 systemd[1]: kubelet.service: Consumed 1.650s CPU time, 264.9M memory peak.
Dec 16 04:09:20.030483 sshd[1791]: Connection closed by 139.178.89.65 port 49922
Dec 16 04:09:20.031282 sshd-session[1776]: pam_unix(sshd:session): session closed for user core
Dec 16 04:09:20.036520 systemd[1]: sshd@1-10.230.69.46:22-139.178.89.65:49922.service: Deactivated successfully.
Dec 16 04:09:20.038847 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 04:09:20.041817 systemd-logind[1613]: Session 3 logged out. Waiting for processes to exit.
Dec 16 04:09:20.043402 systemd-logind[1613]: Removed session 3.
Dec 16 04:09:20.216590 systemd[1]: Started sshd@2-10.230.69.46:22-139.178.89.65:49930.service - OpenSSH per-connection server daemon (139.178.89.65:49930).
Dec 16 04:09:20.299483 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 04:09:20.304403 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 04:09:21.102196 sshd[1798]: Accepted publickey for core from 139.178.89.65 port 49930 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 04:09:21.104914 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 04:09:21.113577 systemd-logind[1613]: New session 4 of user core.
Dec 16 04:09:21.124918 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 04:09:21.589998 sshd[1804]: Connection closed by 139.178.89.65 port 49930
Dec 16 04:09:21.591026 sshd-session[1798]: pam_unix(sshd:session): session closed for user core
Dec 16 04:09:21.596785 systemd-logind[1613]: Session 4 logged out. Waiting for processes to exit.
Dec 16 04:09:21.597258 systemd[1]: sshd@2-10.230.69.46:22-139.178.89.65:49930.service: Deactivated successfully.
Dec 16 04:09:21.599972 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 04:09:21.603283 systemd-logind[1613]: Removed session 4.
Dec 16 04:09:22.614340 login[1739]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 04:09:22.626932 systemd-logind[1613]: New session 5 of user core.
Dec 16 04:09:22.633770 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 04:09:22.647587 login[1735]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 04:09:22.656845 systemd-logind[1613]: New session 6 of user core.
Dec 16 04:09:22.662653 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 04:09:24.319429 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 04:09:24.319601 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 04:09:24.333304 coreos-metadata[1599]: Dec 16 04:09:24.333 WARN failed to locate config-drive, using the metadata service API instead
Dec 16 04:09:24.333978 coreos-metadata[1684]: Dec 16 04:09:24.332 WARN failed to locate config-drive, using the metadata service API instead
Dec 16 04:09:24.358212 coreos-metadata[1599]: Dec 16 04:09:24.357 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Dec 16 04:09:24.359072 coreos-metadata[1684]: Dec 16 04:09:24.358 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 16 04:09:24.365097 coreos-metadata[1599]: Dec 16 04:09:24.365 INFO Fetch failed with 404: resource not found
Dec 16 04:09:24.365097 coreos-metadata[1599]: Dec 16 04:09:24.365 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 16 04:09:24.365789 coreos-metadata[1599]: Dec 16 04:09:24.365 INFO Fetch successful
Dec 16 04:09:24.365970 coreos-metadata[1599]: Dec 16 04:09:24.365 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 16 04:09:24.378492 coreos-metadata[1599]: Dec 16 04:09:24.378 INFO Fetch successful
Dec 16 04:09:24.378768 coreos-metadata[1599]: Dec 16 04:09:24.378 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 16 04:09:24.379362 coreos-metadata[1684]: Dec 16 04:09:24.379 INFO Fetch successful
Dec 16 04:09:24.379487 coreos-metadata[1684]: Dec 16 04:09:24.379 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 16 04:09:24.409759 coreos-metadata[1599]: Dec 16 04:09:24.409 INFO Fetch successful
Dec 16 04:09:24.410161 coreos-metadata[1599]: Dec 16 04:09:24.410 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 16 04:09:24.411369 coreos-metadata[1684]: Dec 16 04:09:24.411 INFO Fetch successful
Dec 16 04:09:24.413264 unknown[1684]: wrote ssh authorized keys file for user: core
Dec 16 04:09:24.422978 coreos-metadata[1599]: Dec 16 04:09:24.422 INFO Fetch successful
Dec 16 04:09:24.423520 coreos-metadata[1599]: Dec 16 04:09:24.423 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 16 04:09:24.440326 update-ssh-keys[1840]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 04:09:24.441288 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 16 04:09:24.443316 coreos-metadata[1599]: Dec 16 04:09:24.443 INFO Fetch successful
Dec 16 04:09:24.448742 systemd[1]: Finished sshkeys.service.
Dec 16 04:09:24.480517 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 04:09:24.481311 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 04:09:24.481587 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 04:09:24.481809 systemd[1]: Startup finished in 3.698s (kernel) + 14.894s (initrd) + 12.723s (userspace) = 31.316s.
Dec 16 04:09:30.169595 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 04:09:30.171909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 04:09:30.498975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 04:09:30.514177 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 04:09:30.620856 kubelet[1857]: E1216 04:09:30.620778 1857 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 04:09:30.625116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 04:09:30.625423 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 04:09:30.626315 systemd[1]: kubelet.service: Consumed 399ms CPU time, 108.2M memory peak.
Dec 16 04:09:31.738441 systemd[1]: Started sshd@3-10.230.69.46:22-139.178.89.65:37462.service - OpenSSH per-connection server daemon (139.178.89.65:37462).
Dec 16 04:09:32.522241 sshd[1865]: Accepted publickey for core from 139.178.89.65 port 37462 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 04:09:32.524162 sshd-session[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 04:09:32.532693 systemd-logind[1613]: New session 7 of user core.
Dec 16 04:09:32.542630 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 04:09:32.967128 sshd[1869]: Connection closed by 139.178.89.65 port 37462
Dec 16 04:09:32.966258 sshd-session[1865]: pam_unix(sshd:session): session closed for user core
Dec 16 04:09:32.972524 systemd[1]: sshd@3-10.230.69.46:22-139.178.89.65:37462.service: Deactivated successfully.
Dec 16 04:09:32.974109 systemd-logind[1613]: Session 7 logged out. Waiting for processes to exit.
Dec 16 04:09:32.975583 systemd[1]: session-7.scope: Deactivated successfully.
Dec 16 04:09:32.978284 systemd-logind[1613]: Removed session 7.
Dec 16 04:09:33.126243 systemd[1]: Started sshd@4-10.230.69.46:22-139.178.89.65:37472.service - OpenSSH per-connection server daemon (139.178.89.65:37472).
Dec 16 04:09:33.904418 sshd[1875]: Accepted publickey for core from 139.178.89.65 port 37472 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 04:09:33.906242 sshd-session[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 04:09:33.915432 systemd-logind[1613]: New session 8 of user core.
Dec 16 04:09:33.921704 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 16 04:09:34.340258 sshd[1879]: Connection closed by 139.178.89.65 port 37472
Dec 16 04:09:34.341325 sshd-session[1875]: pam_unix(sshd:session): session closed for user core
Dec 16 04:09:34.348264 systemd[1]: sshd@4-10.230.69.46:22-139.178.89.65:37472.service: Deactivated successfully.
Dec 16 04:09:34.350737 systemd[1]: session-8.scope: Deactivated successfully.
Dec 16 04:09:34.352378 systemd-logind[1613]: Session 8 logged out. Waiting for processes to exit.
Dec 16 04:09:34.354027 systemd-logind[1613]: Removed session 8.
Dec 16 04:09:34.526327 systemd[1]: Started sshd@5-10.230.69.46:22-139.178.89.65:37474.service - OpenSSH per-connection server daemon (139.178.89.65:37474).
Dec 16 04:09:35.393419 sshd[1885]: Accepted publickey for core from 139.178.89.65 port 37474 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 04:09:35.396009 sshd-session[1885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 04:09:35.403097 systemd-logind[1613]: New session 9 of user core.
Dec 16 04:09:35.414874 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 16 04:09:35.877634 sshd[1889]: Connection closed by 139.178.89.65 port 37474
Dec 16 04:09:35.878602 sshd-session[1885]: pam_unix(sshd:session): session closed for user core
Dec 16 04:09:35.885418 systemd[1]: sshd@5-10.230.69.46:22-139.178.89.65:37474.service: Deactivated successfully.
Dec 16 04:09:35.888000 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 04:09:35.889599 systemd-logind[1613]: Session 9 logged out. Waiting for processes to exit.
Dec 16 04:09:35.891327 systemd-logind[1613]: Removed session 9.
Dec 16 04:09:36.030075 systemd[1]: Started sshd@6-10.230.69.46:22-139.178.89.65:37486.service - OpenSSH per-connection server daemon (139.178.89.65:37486).
Dec 16 04:09:36.817060 sshd[1895]: Accepted publickey for core from 139.178.89.65 port 37486 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 04:09:36.818983 sshd-session[1895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 04:09:36.827455 systemd-logind[1613]: New session 10 of user core.
Dec 16 04:09:36.833650 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 16 04:09:37.128555 sudo[1900]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 04:09:37.129021 sudo[1900]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 04:09:37.143307 sudo[1900]: pam_unix(sudo:session): session closed for user root
Dec 16 04:09:37.287912 sshd[1899]: Connection closed by 139.178.89.65 port 37486
Dec 16 04:09:37.289004 sshd-session[1895]: pam_unix(sshd:session): session closed for user core
Dec 16 04:09:37.295136 systemd[1]: sshd@6-10.230.69.46:22-139.178.89.65:37486.service: Deactivated successfully.
Dec 16 04:09:37.297509 systemd[1]: session-10.scope: Deactivated successfully.
Dec 16 04:09:37.298741 systemd-logind[1613]: Session 10 logged out. Waiting for processes to exit.
Dec 16 04:09:37.300897 systemd-logind[1613]: Removed session 10.
Dec 16 04:09:37.457729 systemd[1]: Started sshd@7-10.230.69.46:22-139.178.89.65:37502.service - OpenSSH per-connection server daemon (139.178.89.65:37502).
Dec 16 04:09:38.246503 sshd[1907]: Accepted publickey for core from 139.178.89.65 port 37502 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 04:09:38.248426 sshd-session[1907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 04:09:38.256504 systemd-logind[1613]: New session 11 of user core.
Dec 16 04:09:38.263673 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 16 04:09:38.549672 sudo[1913]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 04:09:38.550133 sudo[1913]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 04:09:38.578804 sudo[1913]: pam_unix(sudo:session): session closed for user root
Dec 16 04:09:38.589656 sudo[1912]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 04:09:38.590141 sudo[1912]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 04:09:38.601414 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 04:09:38.649000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 16 04:09:38.650774 kernel: kauditd_printk_skb: 117 callbacks suppressed
Dec 16 04:09:38.650851 kernel: audit: type=1305 audit(1765858178.649:224): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 16 04:09:38.651412 augenrules[1937]: No rules
Dec 16 04:09:38.652891 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 04:09:38.653527 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 04:09:38.655045 sudo[1912]: pam_unix(sudo:session): session closed for user root
Dec 16 04:09:38.649000 audit[1937]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffdb276560 a2=420 a3=0 items=0 ppid=1918 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:09:38.649000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 04:09:38.661462 kernel: audit: type=1300 audit(1765858178.649:224): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffdb276560 a2=420 a3=0 items=0 ppid=1918 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:09:38.661533 kernel: audit: type=1327 audit(1765858178.649:224): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 04:09:38.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:38.664354 kernel: audit: type=1130 audit(1765858178.653:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:38.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:38.668092 kernel: audit: type=1131 audit(1765858178.653:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:38.655000 audit[1912]: USER_END pid=1912 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:38.671995 kernel: audit: type=1106 audit(1765858178.655:227): pid=1912 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:38.655000 audit[1912]: CRED_DISP pid=1912 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:38.676157 kernel: audit: type=1104 audit(1765858178.655:228): pid=1912 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:38.805553 sshd[1911]: Connection closed by 139.178.89.65 port 37502
Dec 16 04:09:38.806335 sshd-session[1907]: pam_unix(sshd:session): session closed for user core
Dec 16 04:09:38.808000 audit[1907]: USER_END pid=1907 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 04:09:38.815483 kernel: audit: type=1106 audit(1765858178.808:229): pid=1907 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 04:09:38.815379 systemd[1]: sshd@7-10.230.69.46:22-139.178.89.65:37502.service: Deactivated successfully.
Dec 16 04:09:38.808000 audit[1907]: CRED_DISP pid=1907 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 04:09:38.820424 kernel: audit: type=1104 audit(1765858178.808:230): pid=1907 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 04:09:38.817869 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 04:09:38.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.69.46:22-139.178.89.65:37502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:38.825775 systemd-logind[1613]: Session 11 logged out. Waiting for processes to exit.
Dec 16 04:09:38.826491 kernel: audit: type=1131 audit(1765858178.815:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.69.46:22-139.178.89.65:37502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:38.827451 systemd-logind[1613]: Removed session 11.
Dec 16 04:09:38.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.69.46:22-139.178.89.65:37516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:38.996821 systemd[1]: Started sshd@8-10.230.69.46:22-139.178.89.65:37516.service - OpenSSH per-connection server daemon (139.178.89.65:37516).
Dec 16 04:09:39.863000 audit[1946]: USER_ACCT pid=1946 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 04:09:39.864144 sshd[1946]: Accepted publickey for core from 139.178.89.65 port 37516 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 04:09:39.864000 audit[1946]: CRED_ACQ pid=1946 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 04:09:39.864000 audit[1946]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeaed8ba40 a2=3 a3=0 items=0 ppid=1 pid=1946 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:09:39.864000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 04:09:39.866130 sshd-session[1946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 04:09:39.874266 systemd-logind[1613]: New session 12 of user core.
Dec 16 04:09:39.881603 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 04:09:39.886000 audit[1946]: USER_START pid=1946 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 04:09:39.889000 audit[1950]: CRED_ACQ pid=1950 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 04:09:40.195000 audit[1951]: USER_ACCT pid=1951 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:40.195000 audit[1951]: CRED_REFR pid=1951 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:40.195000 audit[1951]: USER_START pid=1951 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:40.195716 sudo[1951]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 04:09:40.196187 sudo[1951]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 04:09:40.828301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 04:09:40.833637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 04:09:40.917779 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 04:09:40.929926 (dockerd)[1972]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 04:09:41.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:09:41.116156 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 04:09:41.128007 (kubelet)[1978]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 04:09:41.217330 kubelet[1978]: E1216 04:09:41.215955 1978 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 04:09:41.220042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 04:09:41.220286 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 04:09:41.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 04:09:41.221248 systemd[1]: kubelet.service: Consumed 260ms CPU time, 110.7M memory peak. Dec 16 04:09:41.631615 dockerd[1972]: time="2025-12-16T04:09:41.631530846Z" level=info msg="Starting up" Dec 16 04:09:41.632574 dockerd[1972]: time="2025-12-16T04:09:41.632538978Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 04:09:41.657710 dockerd[1972]: time="2025-12-16T04:09:41.657655177Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 04:09:41.680819 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1030113280-merged.mount: Deactivated successfully. Dec 16 04:09:41.715614 dockerd[1972]: time="2025-12-16T04:09:41.715548584Z" level=info msg="Loading containers: start." 
Dec 16 04:09:41.729447 kernel: Initializing XFRM netlink socket Dec 16 04:09:41.816000 audit[2035]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:41.816000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd07bb5c90 a2=0 a3=0 items=0 ppid=1972 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.816000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 04:09:41.819000 audit[2037]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:41.819000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffba2e04a0 a2=0 a3=0 items=0 ppid=1972 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.819000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 04:09:41.822000 audit[2039]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:41.822000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd78a31760 a2=0 a3=0 items=0 ppid=1972 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.822000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 04:09:41.825000 audit[2041]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:41.825000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6472f820 a2=0 a3=0 items=0 ppid=1972 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.825000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 04:09:41.828000 audit[2043]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:41.828000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdabab4e90 a2=0 a3=0 items=0 ppid=1972 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.828000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 04:09:41.831000 audit[2045]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:41.831000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff242cc440 a2=0 a3=0 items=0 ppid=1972 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.831000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 04:09:41.835000 audit[2047]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:41.835000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc449b5380 a2=0 a3=0 items=0 ppid=1972 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.835000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 04:09:41.838000 audit[2049]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:41.838000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffee365e140 a2=0 a3=0 items=0 ppid=1972 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.838000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 04:09:41.875000 audit[2052]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:41.875000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffdea853060 a2=0 a3=0 items=0 ppid=1972 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.875000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 04:09:41.879000 audit[2054]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:41.879000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcf755ef20 a2=0 a3=0 items=0 ppid=1972 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.879000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 04:09:41.882000 audit[2056]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:41.882000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc5303d6c0 a2=0 a3=0 items=0 ppid=1972 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.882000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 04:09:41.886000 audit[2058]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:41.886000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffedfa1f1d0 a2=0 a3=0 items=0 ppid=1972 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.886000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 04:09:41.889000 audit[2060]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:41.889000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc4f845710 a2=0 a3=0 items=0 ppid=1972 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.889000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 04:09:41.943000 audit[2090]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:09:41.943000 audit[2090]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd65164e30 a2=0 a3=0 items=0 ppid=1972 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.943000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 04:09:41.947000 audit[2092]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:09:41.947000 audit[2092]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc52723130 a2=0 a3=0 items=0 ppid=1972 pid=2092 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.947000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 04:09:41.950000 audit[2094]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:09:41.950000 audit[2094]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd6e27c00 a2=0 a3=0 items=0 ppid=1972 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.950000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 04:09:41.952000 audit[2096]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:09:41.952000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcba452470 a2=0 a3=0 items=0 ppid=1972 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.952000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 04:09:41.955000 audit[2098]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:09:41.955000 audit[2098]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc273dc1e0 a2=0 a3=0 items=0 ppid=1972 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.955000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 04:09:41.958000 audit[2100]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:09:41.958000 audit[2100]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe770cd830 a2=0 a3=0 items=0 ppid=1972 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.958000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 04:09:41.962000 audit[2102]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:09:41.962000 audit[2102]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeb2f4a390 a2=0 a3=0 items=0 ppid=1972 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.962000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 04:09:41.965000 audit[2104]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:09:41.965000 audit[2104]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff67e21ad0 a2=0 a3=0 items=0 ppid=1972 pid=2104 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.965000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 04:09:41.969000 audit[2106]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:09:41.969000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc3e7df7c0 a2=0 a3=0 items=0 ppid=1972 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.969000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 04:09:41.973000 audit[2108]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:09:41.973000 audit[2108]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcffbd6d20 a2=0 a3=0 items=0 ppid=1972 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.973000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 04:09:41.976000 audit[2110]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2110 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 16 04:09:41.976000 audit[2110]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffcce1f46b0 a2=0 a3=0 items=0 ppid=1972 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.976000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 04:09:41.979000 audit[2112]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:09:41.979000 audit[2112]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd92a949a0 a2=0 a3=0 items=0 ppid=1972 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.979000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 04:09:41.983000 audit[2114]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:09:41.983000 audit[2114]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcd8df6050 a2=0 a3=0 items=0 ppid=1972 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.983000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 04:09:41.992000 audit[2119]: NETFILTER_CFG table=filter:28 family=2 entries=1 
op=nft_register_chain pid=2119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:41.992000 audit[2119]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff82914790 a2=0 a3=0 items=0 ppid=1972 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.992000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 04:09:41.995000 audit[2121]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:41.995000 audit[2121]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcc87bf9c0 a2=0 a3=0 items=0 ppid=1972 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.995000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 04:09:41.998000 audit[2123]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:41.998000 audit[2123]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdea8df860 a2=0 a3=0 items=0 ppid=1972 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:41.998000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 04:09:42.001000 audit[2125]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain 
pid=2125 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:09:42.001000 audit[2125]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd8de9b080 a2=0 a3=0 items=0 ppid=1972 pid=2125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:42.001000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 04:09:42.006000 audit[2127]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:09:42.006000 audit[2127]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcfebbb320 a2=0 a3=0 items=0 ppid=1972 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:42.006000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 04:09:42.010000 audit[2129]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:09:42.010000 audit[2129]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc507c2d90 a2=0 a3=0 items=0 ppid=1972 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:42.010000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 04:09:42.018613 systemd-timesyncd[1526]: Network configuration changed, trying to establish connection. 
Dec 16 04:09:42.039000 audit[2134]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:42.039000 audit[2134]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc550c53e0 a2=0 a3=0 items=0 ppid=1972 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:42.039000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 04:09:42.043000 audit[2136]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:42.043000 audit[2136]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffce07ff170 a2=0 a3=0 items=0 ppid=1972 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:42.043000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 04:09:42.056000 audit[2144]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:42.056000 audit[2144]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fffc67f60e0 a2=0 a3=0 items=0 ppid=1972 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:42.056000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 04:09:42.070000 audit[2150]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:42.070000 audit[2150]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc590ff630 a2=0 a3=0 items=0 ppid=1972 pid=2150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:42.070000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 04:09:42.074000 audit[2152]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:42.074000 audit[2152]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffcf30dfd50 a2=0 a3=0 items=0 ppid=1972 pid=2152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:42.074000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 04:09:42.077000 audit[2154]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2154 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:42.077000 audit[2154]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd563e5d40 a2=0 a3=0 items=0 ppid=1972 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:42.077000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 04:09:42.081000 audit[2156]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:42.081000 audit[2156]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff5047ea20 a2=0 a3=0 items=0 ppid=1972 pid=2156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:42.081000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 04:09:42.084000 audit[2158]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:09:42.084000 audit[2158]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc7c5fd500 a2=0 a3=0 items=0 ppid=1972 pid=2158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:09:42.084000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 04:09:42.085835 systemd-networkd[1555]: docker0: Link UP Dec 16 04:09:42.089787 dockerd[1972]: time="2025-12-16T04:09:42.089686088Z" 
level=info msg="Loading containers: done." Dec 16 04:09:42.118331 dockerd[1972]: time="2025-12-16T04:09:42.117822709Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 04:09:42.118331 dockerd[1972]: time="2025-12-16T04:09:42.117960454Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 04:09:42.118331 dockerd[1972]: time="2025-12-16T04:09:42.118093570Z" level=info msg="Initializing buildkit" Dec 16 04:09:42.144201 dockerd[1972]: time="2025-12-16T04:09:42.144053829Z" level=info msg="Completed buildkit initialization" Dec 16 04:09:42.154791 dockerd[1972]: time="2025-12-16T04:09:42.154718432Z" level=info msg="Daemon has completed initialization" Dec 16 04:09:42.155149 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 04:09:42.156168 dockerd[1972]: time="2025-12-16T04:09:42.156111898Z" level=info msg="API listen on /run/docker.sock" Dec 16 04:09:42.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:09:42.260099 systemd-timesyncd[1526]: Contacted time server [2a01:7e00::f03c:94ff:fe78:a7d1]:123 (2.flatcar.pool.ntp.org). Dec 16 04:09:42.260213 systemd-timesyncd[1526]: Initial clock synchronization to Tue 2025-12-16 04:09:41.993315 UTC. Dec 16 04:09:42.676394 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4259762719-merged.mount: Deactivated successfully. 
Dec 16 04:09:43.345775 containerd[1631]: time="2025-12-16T04:09:43.345457406Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 04:09:44.657037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3659498896.mount: Deactivated successfully. Dec 16 04:09:46.845304 containerd[1631]: time="2025-12-16T04:09:46.845218990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:09:46.861644 containerd[1631]: time="2025-12-16T04:09:46.861548485Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=27403437" Dec 16 04:09:46.862834 containerd[1631]: time="2025-12-16T04:09:46.862776869Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:09:46.868405 containerd[1631]: time="2025-12-16T04:09:46.867208856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:09:46.868692 containerd[1631]: time="2025-12-16T04:09:46.868657322Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 3.522898039s" Dec 16 04:09:46.868846 containerd[1631]: time="2025-12-16T04:09:46.868817703Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 16 04:09:46.870135 containerd[1631]: 
time="2025-12-16T04:09:46.870091997Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 04:09:47.672573 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 16 04:09:47.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:09:47.676530 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 16 04:09:47.676680 kernel: audit: type=1131 audit(1765858187.671:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:09:47.695000 audit: BPF prog-id=61 op=UNLOAD Dec 16 04:09:47.697431 kernel: audit: type=1334 audit(1765858187.695:285): prog-id=61 op=UNLOAD Dec 16 04:09:50.525100 containerd[1631]: time="2025-12-16T04:09:50.525032203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:09:50.526680 containerd[1631]: time="2025-12-16T04:09:50.526611728Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24986607" Dec 16 04:09:50.528401 containerd[1631]: time="2025-12-16T04:09:50.527300531Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:09:50.540626 containerd[1631]: time="2025-12-16T04:09:50.539794754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:09:50.541329 containerd[1631]: 
time="2025-12-16T04:09:50.541290424Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 3.671156496s" Dec 16 04:09:50.541426 containerd[1631]: time="2025-12-16T04:09:50.541331076Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 16 04:09:50.542697 containerd[1631]: time="2025-12-16T04:09:50.542669606Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 04:09:51.329034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 04:09:51.335638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 04:09:51.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:09:51.749734 kernel: audit: type=1130 audit(1765858191.740:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:09:51.741752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 04:09:51.776329 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 04:09:52.117332 kubelet[2271]: E1216 04:09:52.117263 2271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 04:09:52.120773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 04:09:52.121027 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 04:09:52.131533 kernel: audit: type=1131 audit(1765858192.120:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 04:09:52.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 04:09:52.122257 systemd[1]: kubelet.service: Consumed 270ms CPU time, 110.5M memory peak. 
Dec 16 04:09:53.220169 containerd[1631]: time="2025-12-16T04:09:53.219642868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:09:53.224977 containerd[1631]: time="2025-12-16T04:09:53.224887944Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19396111" Dec 16 04:09:53.225943 containerd[1631]: time="2025-12-16T04:09:53.225898646Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:09:53.232145 containerd[1631]: time="2025-12-16T04:09:53.232083505Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 2.689226129s" Dec 16 04:09:53.232326 containerd[1631]: time="2025-12-16T04:09:53.232202420Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 16 04:09:53.232453 containerd[1631]: time="2025-12-16T04:09:53.232413535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:09:53.235174 containerd[1631]: time="2025-12-16T04:09:53.235125262Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 04:09:56.756181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1136575841.mount: Deactivated successfully. 
Dec 16 04:09:57.928946 containerd[1631]: time="2025-12-16T04:09:57.928871718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:09:57.930282 containerd[1631]: time="2025-12-16T04:09:57.930036752Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31157702" Dec 16 04:09:57.931488 containerd[1631]: time="2025-12-16T04:09:57.931434393Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:09:57.934092 containerd[1631]: time="2025-12-16T04:09:57.934056732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:09:57.935355 containerd[1631]: time="2025-12-16T04:09:57.935298879Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 4.700109039s" Dec 16 04:09:57.935510 containerd[1631]: time="2025-12-16T04:09:57.935481293Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 16 04:09:57.936650 containerd[1631]: time="2025-12-16T04:09:57.936585531Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 04:09:58.656778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1188754619.mount: Deactivated successfully. 
Dec 16 04:10:00.407402 containerd[1631]: time="2025-12-16T04:10:00.407269634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:10:00.409123 containerd[1631]: time="2025-12-16T04:10:00.408818168Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569900" Dec 16 04:10:00.409897 containerd[1631]: time="2025-12-16T04:10:00.409860119Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:10:00.426977 containerd[1631]: time="2025-12-16T04:10:00.426918190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:10:00.428501 containerd[1631]: time="2025-12-16T04:10:00.428463225Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.49183024s" Dec 16 04:10:00.428768 containerd[1631]: time="2025-12-16T04:10:00.428619623Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 16 04:10:00.429341 containerd[1631]: time="2025-12-16T04:10:00.429300297Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 04:10:01.179405 update_engine[1614]: I20251216 04:10:01.179138 1614 update_attempter.cc:509] Updating boot flags... 
Dec 16 04:10:01.679689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2235323539.mount: Deactivated successfully. Dec 16 04:10:01.685995 containerd[1631]: time="2025-12-16T04:10:01.685922055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 04:10:01.687489 containerd[1631]: time="2025-12-16T04:10:01.687147771Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 04:10:01.688425 containerd[1631]: time="2025-12-16T04:10:01.688369825Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 04:10:01.694783 containerd[1631]: time="2025-12-16T04:10:01.694720974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 04:10:01.696583 containerd[1631]: time="2025-12-16T04:10:01.696537443Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.267074006s" Dec 16 04:10:01.696692 containerd[1631]: time="2025-12-16T04:10:01.696587445Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 04:10:01.698441 containerd[1631]: time="2025-12-16T04:10:01.698405893Z" level=info msg="PullImage 
\"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 04:10:02.328691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 04:10:02.331696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 04:10:02.706270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4007748335.mount: Deactivated successfully. Dec 16 04:10:02.874774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 04:10:02.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:02.898047 kernel: audit: type=1130 audit(1765858202.874:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:02.914074 (kubelet)[2379]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 04:10:03.145764 kubelet[2379]: E1216 04:10:03.145687 2379 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 04:10:03.148819 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 04:10:03.149068 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 04:10:03.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 04:10:03.149714 systemd[1]: kubelet.service: Consumed 542ms CPU time, 109.1M memory peak. Dec 16 04:10:03.154401 kernel: audit: type=1131 audit(1765858203.149:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 04:10:06.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.69.46:22-80.94.95.116:62616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:06.773177 systemd[1]: Started sshd@9-10.230.69.46:22-80.94.95.116:62616.service - OpenSSH per-connection server daemon (80.94.95.116:62616). Dec 16 04:10:06.797577 kernel: audit: type=1130 audit(1765858206.773:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.69.46:22-80.94.95.116:62616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:09.225467 sshd[2427]: Connection closed by authenticating user root 80.94.95.116 port 62616 [preauth] Dec 16 04:10:09.235393 kernel: audit: type=1109 audit(1765858209.226:291): pid=2427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=80.94.95.116 addr=80.94.95.116 terminal=ssh res=failed' Dec 16 04:10:09.226000 audit[2427]: USER_ERR pid=2427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=80.94.95.116 addr=80.94.95.116 terminal=ssh res=failed' Dec 16 04:10:09.238309 systemd[1]: sshd@9-10.230.69.46:22-80.94.95.116:62616.service: Deactivated successfully. 
Dec 16 04:10:09.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.69.46:22-80.94.95.116:62616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:09.245405 kernel: audit: type=1131 audit(1765858209.239:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.69.46:22-80.94.95.116:62616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:09.572531 containerd[1631]: time="2025-12-16T04:10:09.571890377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:10:09.575359 containerd[1631]: time="2025-12-16T04:10:09.575294316Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55732116" Dec 16 04:10:09.576110 containerd[1631]: time="2025-12-16T04:10:09.576041549Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:10:09.581194 containerd[1631]: time="2025-12-16T04:10:09.581139084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:10:09.583394 containerd[1631]: time="2025-12-16T04:10:09.583211386Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 7.88476671s" Dec 16 04:10:09.583394 containerd[1631]: time="2025-12-16T04:10:09.583266978Z" level=info 
msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 16 04:10:13.328617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 16 04:10:13.333618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 04:10:14.151225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 04:10:14.160499 kernel: audit: type=1130 audit(1765858214.151:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:14.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:14.174694 (kubelet)[2466]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 04:10:14.254339 kubelet[2466]: E1216 04:10:14.254229 2466 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 04:10:14.257849 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 04:10:14.258352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 04:10:14.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 04:10:14.263449 systemd[1]: kubelet.service: Consumed 236ms CPU time, 110.4M memory peak. Dec 16 04:10:14.264958 kernel: audit: type=1131 audit(1765858214.259:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 04:10:14.816271 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 04:10:14.817583 systemd[1]: kubelet.service: Consumed 236ms CPU time, 110.4M memory peak. Dec 16 04:10:14.826685 kernel: audit: type=1130 audit(1765858214.816:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:14.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:14.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:14.832524 kernel: audit: type=1131 audit(1765858214.816:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:14.832958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 04:10:14.873053 systemd[1]: Reload requested from client PID 2480 ('systemctl') (unit session-12.scope)... Dec 16 04:10:14.873331 systemd[1]: Reloading... Dec 16 04:10:15.065406 zram_generator::config[2528]: No configuration found. Dec 16 04:10:15.423693 systemd[1]: Reloading finished in 549 ms. 
Dec 16 04:10:15.466429 kernel: audit: type=1334 audit(1765858215.456:297): prog-id=65 op=LOAD Dec 16 04:10:15.456000 audit: BPF prog-id=65 op=LOAD Dec 16 04:10:15.456000 audit: BPF prog-id=57 op=UNLOAD Dec 16 04:10:15.470408 kernel: audit: type=1334 audit(1765858215.456:298): prog-id=57 op=UNLOAD Dec 16 04:10:15.462000 audit: BPF prog-id=66 op=LOAD Dec 16 04:10:15.462000 audit: BPF prog-id=48 op=UNLOAD Dec 16 04:10:15.472861 kernel: audit: type=1334 audit(1765858215.462:299): prog-id=66 op=LOAD Dec 16 04:10:15.472938 kernel: audit: type=1334 audit(1765858215.462:300): prog-id=48 op=UNLOAD Dec 16 04:10:15.462000 audit: BPF prog-id=67 op=LOAD Dec 16 04:10:15.475081 kernel: audit: type=1334 audit(1765858215.462:301): prog-id=67 op=LOAD Dec 16 04:10:15.475191 kernel: audit: type=1334 audit(1765858215.462:302): prog-id=68 op=LOAD Dec 16 04:10:15.462000 audit: BPF prog-id=68 op=LOAD Dec 16 04:10:15.462000 audit: BPF prog-id=49 op=UNLOAD Dec 16 04:10:15.478399 kernel: audit: type=1334 audit(1765858215.462:303): prog-id=49 op=UNLOAD Dec 16 04:10:15.462000 audit: BPF prog-id=50 op=UNLOAD Dec 16 04:10:15.463000 audit: BPF prog-id=69 op=LOAD Dec 16 04:10:15.463000 audit: BPF prog-id=51 op=UNLOAD Dec 16 04:10:15.463000 audit: BPF prog-id=70 op=LOAD Dec 16 04:10:15.463000 audit: BPF prog-id=71 op=LOAD Dec 16 04:10:15.464000 audit: BPF prog-id=52 op=UNLOAD Dec 16 04:10:15.464000 audit: BPF prog-id=53 op=UNLOAD Dec 16 04:10:15.465000 audit: BPF prog-id=72 op=LOAD Dec 16 04:10:15.465000 audit: BPF prog-id=41 op=UNLOAD Dec 16 04:10:15.466000 audit: BPF prog-id=73 op=LOAD Dec 16 04:10:15.466000 audit: BPF prog-id=74 op=LOAD Dec 16 04:10:15.466000 audit: BPF prog-id=42 op=UNLOAD Dec 16 04:10:15.466000 audit: BPF prog-id=43 op=UNLOAD Dec 16 04:10:15.478000 audit: BPF prog-id=75 op=LOAD Dec 16 04:10:15.478000 audit: BPF prog-id=56 op=UNLOAD Dec 16 04:10:15.479000 audit: BPF prog-id=76 op=LOAD Dec 16 04:10:15.479000 audit: BPF prog-id=45 op=UNLOAD Dec 16 04:10:15.480000 audit: BPF 
prog-id=77 op=LOAD Dec 16 04:10:15.480000 audit: BPF prog-id=78 op=LOAD Dec 16 04:10:15.480000 audit: BPF prog-id=46 op=UNLOAD Dec 16 04:10:15.480000 audit: BPF prog-id=47 op=UNLOAD Dec 16 04:10:15.481000 audit: BPF prog-id=79 op=LOAD Dec 16 04:10:15.481000 audit: BPF prog-id=44 op=UNLOAD Dec 16 04:10:15.484000 audit: BPF prog-id=80 op=LOAD Dec 16 04:10:15.484000 audit: BPF prog-id=64 op=UNLOAD Dec 16 04:10:15.486000 audit: BPF prog-id=81 op=LOAD Dec 16 04:10:15.486000 audit: BPF prog-id=58 op=UNLOAD Dec 16 04:10:15.486000 audit: BPF prog-id=82 op=LOAD Dec 16 04:10:15.486000 audit: BPF prog-id=83 op=LOAD Dec 16 04:10:15.486000 audit: BPF prog-id=59 op=UNLOAD Dec 16 04:10:15.487000 audit: BPF prog-id=60 op=UNLOAD Dec 16 04:10:15.487000 audit: BPF prog-id=84 op=LOAD Dec 16 04:10:15.487000 audit: BPF prog-id=85 op=LOAD Dec 16 04:10:15.487000 audit: BPF prog-id=54 op=UNLOAD Dec 16 04:10:15.487000 audit: BPF prog-id=55 op=UNLOAD Dec 16 04:10:15.511911 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 04:10:15.512528 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 04:10:15.513159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 04:10:15.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 04:10:15.513662 systemd[1]: kubelet.service: Consumed 164ms CPU time, 97.8M memory peak. Dec 16 04:10:15.518042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 04:10:15.713689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 04:10:15.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 04:10:15.728160 (kubelet)[2596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 04:10:15.869137 kubelet[2596]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 04:10:15.869137 kubelet[2596]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 04:10:15.869137 kubelet[2596]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 04:10:15.870231 kubelet[2596]: I1216 04:10:15.870153 2596 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 04:10:16.449162 kubelet[2596]: I1216 04:10:16.449096 2596 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 04:10:16.449162 kubelet[2596]: I1216 04:10:16.449139 2596 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 04:10:16.449616 kubelet[2596]: I1216 04:10:16.449541 2596 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 04:10:16.485545 kubelet[2596]: E1216 04:10:16.485481 2596 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.69.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.69.46:6443: connect: connection refused" logger="UnhandledError" Dec 16 04:10:16.495715 kubelet[2596]: 
I1216 04:10:16.495463 2596 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 04:10:16.533765 kubelet[2596]: I1216 04:10:16.533723 2596 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 04:10:16.546622 kubelet[2596]: I1216 04:10:16.546563 2596 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 04:10:16.550015 kubelet[2596]: I1216 04:10:16.549928 2596 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 04:10:16.550304 kubelet[2596]: I1216 04:10:16.550006 2596 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-cuii1.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":nul
l,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 04:10:16.551939 kubelet[2596]: I1216 04:10:16.551910 2596 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 04:10:16.551939 kubelet[2596]: I1216 04:10:16.551941 2596 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 04:10:16.553222 kubelet[2596]: I1216 04:10:16.553191 2596 state_mem.go:36] "Initialized new in-memory state store" Dec 16 04:10:16.557512 kubelet[2596]: I1216 04:10:16.557483 2596 kubelet.go:446] "Attempting to sync node with API server" Dec 16 04:10:16.557595 kubelet[2596]: I1216 04:10:16.557539 2596 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 04:10:16.559286 kubelet[2596]: I1216 04:10:16.559034 2596 kubelet.go:352] "Adding apiserver pod source" Dec 16 04:10:16.559286 kubelet[2596]: I1216 04:10:16.559084 2596 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 04:10:16.575410 kubelet[2596]: W1216 04:10:16.575285 2596 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.69.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.69.46:6443: connect: connection refused Dec 16 04:10:16.575410 kubelet[2596]: E1216 04:10:16.575412 2596 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.69.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.69.46:6443: connect: connection refused" logger="UnhandledError" Dec 16 04:10:16.575665 kubelet[2596]: W1216 
04:10:16.575522 2596 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.69.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-cuii1.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.69.46:6443: connect: connection refused Dec 16 04:10:16.575665 kubelet[2596]: E1216 04:10:16.575571 2596 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.69.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-cuii1.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.69.46:6443: connect: connection refused" logger="UnhandledError" Dec 16 04:10:16.577905 kubelet[2596]: I1216 04:10:16.577580 2596 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 04:10:16.580794 kubelet[2596]: I1216 04:10:16.580733 2596 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 04:10:16.581526 kubelet[2596]: W1216 04:10:16.581490 2596 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 16 04:10:16.583398 kubelet[2596]: I1216 04:10:16.582582 2596 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 04:10:16.583398 kubelet[2596]: I1216 04:10:16.582642 2596 server.go:1287] "Started kubelet" Dec 16 04:10:16.583604 kubelet[2596]: I1216 04:10:16.583555 2596 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 04:10:16.585878 kubelet[2596]: I1216 04:10:16.585857 2596 server.go:479] "Adding debug handlers to kubelet server" Dec 16 04:10:16.587590 kubelet[2596]: I1216 04:10:16.587514 2596 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 04:10:16.589041 kubelet[2596]: I1216 04:10:16.587918 2596 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 04:10:16.592443 kubelet[2596]: E1216 04:10:16.589098 2596 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.69.46:6443/api/v1/namespaces/default/events\": dial tcp 10.230.69.46:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-cuii1.gb1.brightbox.com.188196b12b1c779e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-cuii1.gb1.brightbox.com,UID:srv-cuii1.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-cuii1.gb1.brightbox.com,},FirstTimestamp:2025-12-16 04:10:16.582608798 +0000 UTC m=+0.789332353,LastTimestamp:2025-12-16 04:10:16.582608798 +0000 UTC m=+0.789332353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-cuii1.gb1.brightbox.com,}" Dec 16 04:10:16.592875 kubelet[2596]: I1216 04:10:16.592849 2596 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 04:10:16.594083 kubelet[2596]: I1216 04:10:16.593506 2596 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 04:10:16.603404 kubelet[2596]: E1216 04:10:16.603353 2596 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-cuii1.gb1.brightbox.com\" not found" Dec 16 04:10:16.603539 kubelet[2596]: I1216 04:10:16.603442 2596 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 04:10:16.603884 kubelet[2596]: I1216 04:10:16.603854 2596 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 04:10:16.603987 kubelet[2596]: I1216 04:10:16.603953 2596 reconciler.go:26] "Reconciler: start to sync state" Dec 16 04:10:16.604623 kubelet[2596]: W1216 04:10:16.604574 2596 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.69.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.69.46:6443: connect: connection refused Dec 16 04:10:16.604694 kubelet[2596]: E1216 04:10:16.604635 2596 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.69.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.69.46:6443: connect: connection refused" logger="UnhandledError" Dec 16 04:10:16.605178 kubelet[2596]: E1216 04:10:16.605121 2596 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.69.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-cuii1.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.69.46:6443: connect: connection refused" interval="200ms" Dec 16 04:10:16.608467 kubelet[2596]: I1216 04:10:16.608367 2596 factory.go:221] Registration of the systemd container factory successfully Dec 16 04:10:16.609047 kubelet[2596]: I1216 04:10:16.608582 2596 factory.go:219] 
Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 04:10:16.608000 audit[2607]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:16.608000 audit[2607]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe9caeaa80 a2=0 a3=0 items=0 ppid=2596 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:16.608000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 04:10:16.610951 kubelet[2596]: E1216 04:10:16.610906 2596 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 04:10:16.612000 audit[2608]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:16.612000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7e12dc30 a2=0 a3=0 items=0 ppid=2596 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:16.612000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 04:10:16.615627 kubelet[2596]: I1216 04:10:16.615596 2596 factory.go:221] Registration of the containerd container factory successfully Dec 16 04:10:16.621000 audit[2611]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2611 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:16.621000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffec1d92d30 a2=0 a3=0 items=0 ppid=2596 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:16.621000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 04:10:16.639000 audit[2615]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:16.639000 audit[2615]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdffe4c330 a2=0 a3=0 items=0 ppid=2596 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:16.639000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 04:10:16.655000 audit[2620]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2620 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:16.655000 audit[2620]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcc8a7b4a0 a2=0 a3=0 items=0 ppid=2596 pid=2620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:16.655000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 04:10:16.657844 kubelet[2596]: I1216 04:10:16.657778 2596 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 04:10:16.659000 audit[2621]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2621 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:16.659000 audit[2621]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffedcf51250 a2=0 a3=0 items=0 ppid=2596 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:16.659000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 04:10:16.660000 audit[2622]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2622 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:16.660000 audit[2622]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf0bf48d0 a2=0 a3=0 items=0 ppid=2596 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:16.660000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 04:10:16.661821 kubelet[2596]: I1216 04:10:16.660832 2596 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 16 04:10:16.661821 kubelet[2596]: I1216 04:10:16.660873 2596 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 04:10:16.661821 kubelet[2596]: I1216 04:10:16.660912 2596 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 04:10:16.661821 kubelet[2596]: I1216 04:10:16.660926 2596 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 04:10:16.661821 kubelet[2596]: E1216 04:10:16.661006 2596 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 04:10:16.663000 audit[2623]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2623 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:16.663000 audit[2623]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc0405a50 a2=0 a3=0 items=0 ppid=2596 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:16.663000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 04:10:16.667508 kubelet[2596]: W1216 04:10:16.667477 2596 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.69.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.69.46:6443: connect: connection refused Dec 16 04:10:16.667648 kubelet[2596]: E1216 04:10:16.667620 2596 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.69.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
10.230.69.46:6443: connect: connection refused" logger="UnhandledError" Dec 16 04:10:16.666000 audit[2625]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2625 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:16.666000 audit[2625]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6d0fac60 a2=0 a3=0 items=0 ppid=2596 pid=2625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:16.666000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 04:10:16.667000 audit[2627]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2627 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:16.667000 audit[2627]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbb0a2700 a2=0 a3=0 items=0 ppid=2596 pid=2627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:16.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 04:10:16.668000 audit[2626]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2626 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:16.668000 audit[2626]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd2c244290 a2=0 a3=0 items=0 ppid=2596 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:16.668000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 04:10:16.671900 kubelet[2596]: I1216 04:10:16.671862 2596 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 04:10:16.671900 kubelet[2596]: I1216 04:10:16.671891 2596 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 04:10:16.672031 kubelet[2596]: I1216 04:10:16.671929 2596 state_mem.go:36] "Initialized new in-memory state store" Dec 16 04:10:16.671000 audit[2628]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2628 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:16.671000 audit[2628]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe923893e0 a2=0 a3=0 items=0 ppid=2596 pid=2628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:16.671000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 04:10:16.675569 kubelet[2596]: I1216 04:10:16.675538 2596 policy_none.go:49] "None policy: Start" Dec 16 04:10:16.675647 kubelet[2596]: I1216 04:10:16.675578 2596 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 04:10:16.675647 kubelet[2596]: I1216 04:10:16.675615 2596 state_mem.go:35] "Initializing new in-memory state store" Dec 16 04:10:16.684338 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 04:10:16.703973 kubelet[2596]: E1216 04:10:16.703535 2596 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-cuii1.gb1.brightbox.com\" not found" Dec 16 04:10:16.707938 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 16 04:10:16.733475 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 04:10:16.735945 kubelet[2596]: I1216 04:10:16.735914 2596 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 04:10:16.738644 kubelet[2596]: I1216 04:10:16.738529 2596 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 04:10:16.738644 kubelet[2596]: I1216 04:10:16.738561 2596 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 04:10:16.741568 kubelet[2596]: I1216 04:10:16.741541 2596 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 04:10:16.742690 kubelet[2596]: E1216 04:10:16.742437 2596 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 04:10:16.743251 kubelet[2596]: E1216 04:10:16.743160 2596 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-cuii1.gb1.brightbox.com\" not found" Dec 16 04:10:16.777145 systemd[1]: Created slice kubepods-burstable-pod403bf233a26370d3019d5ea568cafe20.slice - libcontainer container kubepods-burstable-pod403bf233a26370d3019d5ea568cafe20.slice. 
Dec 16 04:10:16.810006 kubelet[2596]: E1216 04:10:16.808109 2596 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.69.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-cuii1.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.69.46:6443: connect: connection refused" interval="400ms" Dec 16 04:10:16.810463 kubelet[2596]: E1216 04:10:16.810412 2596 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cuii1.gb1.brightbox.com\" not found" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:16.816908 systemd[1]: Created slice kubepods-burstable-pod1089326a866948ecac28069a5c5b429d.slice - libcontainer container kubepods-burstable-pod1089326a866948ecac28069a5c5b429d.slice. Dec 16 04:10:16.829905 kubelet[2596]: E1216 04:10:16.829867 2596 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cuii1.gb1.brightbox.com\" not found" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:16.832002 systemd[1]: Created slice kubepods-burstable-pod80428432a3aac2ddd5e9be9dae72a577.slice - libcontainer container kubepods-burstable-pod80428432a3aac2ddd5e9be9dae72a577.slice. 
Dec 16 04:10:16.835590 kubelet[2596]: E1216 04:10:16.835529 2596 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cuii1.gb1.brightbox.com\" not found" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:16.842410 kubelet[2596]: I1216 04:10:16.842357 2596 kubelet_node_status.go:75] "Attempting to register node" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:16.843091 kubelet[2596]: E1216 04:10:16.843033 2596 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.69.46:6443/api/v1/nodes\": dial tcp 10.230.69.46:6443: connect: connection refused" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:16.904466 kubelet[2596]: I1216 04:10:16.904329 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80428432a3aac2ddd5e9be9dae72a577-ca-certs\") pod \"kube-controller-manager-srv-cuii1.gb1.brightbox.com\" (UID: \"80428432a3aac2ddd5e9be9dae72a577\") " pod="kube-system/kube-controller-manager-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:16.904466 kubelet[2596]: I1216 04:10:16.904435 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80428432a3aac2ddd5e9be9dae72a577-kubeconfig\") pod \"kube-controller-manager-srv-cuii1.gb1.brightbox.com\" (UID: \"80428432a3aac2ddd5e9be9dae72a577\") " pod="kube-system/kube-controller-manager-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:16.904466 kubelet[2596]: I1216 04:10:16.904485 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1089326a866948ecac28069a5c5b429d-k8s-certs\") pod \"kube-apiserver-srv-cuii1.gb1.brightbox.com\" (UID: \"1089326a866948ecac28069a5c5b429d\") " pod="kube-system/kube-apiserver-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:16.905285 
kubelet[2596]: I1216 04:10:16.904536 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1089326a866948ecac28069a5c5b429d-usr-share-ca-certificates\") pod \"kube-apiserver-srv-cuii1.gb1.brightbox.com\" (UID: \"1089326a866948ecac28069a5c5b429d\") " pod="kube-system/kube-apiserver-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:16.905285 kubelet[2596]: I1216 04:10:16.904582 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80428432a3aac2ddd5e9be9dae72a577-flexvolume-dir\") pod \"kube-controller-manager-srv-cuii1.gb1.brightbox.com\" (UID: \"80428432a3aac2ddd5e9be9dae72a577\") " pod="kube-system/kube-controller-manager-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:16.905285 kubelet[2596]: I1216 04:10:16.904649 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80428432a3aac2ddd5e9be9dae72a577-k8s-certs\") pod \"kube-controller-manager-srv-cuii1.gb1.brightbox.com\" (UID: \"80428432a3aac2ddd5e9be9dae72a577\") " pod="kube-system/kube-controller-manager-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:16.905285 kubelet[2596]: I1216 04:10:16.904678 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80428432a3aac2ddd5e9be9dae72a577-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-cuii1.gb1.brightbox.com\" (UID: \"80428432a3aac2ddd5e9be9dae72a577\") " pod="kube-system/kube-controller-manager-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:16.905285 kubelet[2596]: I1216 04:10:16.904740 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/403bf233a26370d3019d5ea568cafe20-kubeconfig\") pod \"kube-scheduler-srv-cuii1.gb1.brightbox.com\" (UID: \"403bf233a26370d3019d5ea568cafe20\") " pod="kube-system/kube-scheduler-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:16.905613 kubelet[2596]: I1216 04:10:16.904768 2596 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1089326a866948ecac28069a5c5b429d-ca-certs\") pod \"kube-apiserver-srv-cuii1.gb1.brightbox.com\" (UID: \"1089326a866948ecac28069a5c5b429d\") " pod="kube-system/kube-apiserver-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:16.943122 kubelet[2596]: E1216 04:10:16.942967 2596 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.69.46:6443/api/v1/namespaces/default/events\": dial tcp 10.230.69.46:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-cuii1.gb1.brightbox.com.188196b12b1c779e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-cuii1.gb1.brightbox.com,UID:srv-cuii1.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-cuii1.gb1.brightbox.com,},FirstTimestamp:2025-12-16 04:10:16.582608798 +0000 UTC m=+0.789332353,LastTimestamp:2025-12-16 04:10:16.582608798 +0000 UTC m=+0.789332353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-cuii1.gb1.brightbox.com,}" Dec 16 04:10:17.046431 kubelet[2596]: I1216 04:10:17.046278 2596 kubelet_node_status.go:75] "Attempting to register node" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:17.047655 kubelet[2596]: E1216 04:10:17.047621 2596 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.69.46:6443/api/v1/nodes\": dial tcp 10.230.69.46:6443: 
connect: connection refused" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:17.114415 containerd[1631]: time="2025-12-16T04:10:17.114249447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-cuii1.gb1.brightbox.com,Uid:403bf233a26370d3019d5ea568cafe20,Namespace:kube-system,Attempt:0,}" Dec 16 04:10:17.131907 containerd[1631]: time="2025-12-16T04:10:17.131823444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-cuii1.gb1.brightbox.com,Uid:1089326a866948ecac28069a5c5b429d,Namespace:kube-system,Attempt:0,}" Dec 16 04:10:17.137360 containerd[1631]: time="2025-12-16T04:10:17.137313833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-cuii1.gb1.brightbox.com,Uid:80428432a3aac2ddd5e9be9dae72a577,Namespace:kube-system,Attempt:0,}" Dec 16 04:10:17.209435 kubelet[2596]: E1216 04:10:17.209340 2596 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.69.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-cuii1.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.69.46:6443: connect: connection refused" interval="800ms" Dec 16 04:10:17.266741 containerd[1631]: time="2025-12-16T04:10:17.266644280Z" level=info msg="connecting to shim e33c619f9f6248e5c844fa16b1d09dfa5373799a4a27b981717f66b5c9719096" address="unix:///run/containerd/s/781fedde0b3d9871128478d9f9e599ed7d9cdc9c05745618a6cb5689e25fd76b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 04:10:17.267335 containerd[1631]: time="2025-12-16T04:10:17.266644272Z" level=info msg="connecting to shim c58f7d51aece8688bc65b723ffa01211059320670c8560d00cc36657e7dac292" address="unix:///run/containerd/s/e2c2af4b61407c173deb3b5b831cbcea6208ff95c57546d71766f5dc5082219d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 04:10:17.277546 containerd[1631]: time="2025-12-16T04:10:17.277480485Z" level=info msg="connecting to shim 389876eb5186d2872cb483c0f801c2bdc20fc8495d9dd26255f5467c342daa74" 
address="unix:///run/containerd/s/61bf8e2a49753bb4558eff78b3548d2cc7eaf0649161d048f02ee1f607c38952" namespace=k8s.io protocol=ttrpc version=3 Dec 16 04:10:17.410675 systemd[1]: Started cri-containerd-e33c619f9f6248e5c844fa16b1d09dfa5373799a4a27b981717f66b5c9719096.scope - libcontainer container e33c619f9f6248e5c844fa16b1d09dfa5373799a4a27b981717f66b5c9719096. Dec 16 04:10:17.424144 systemd[1]: Started cri-containerd-389876eb5186d2872cb483c0f801c2bdc20fc8495d9dd26255f5467c342daa74.scope - libcontainer container 389876eb5186d2872cb483c0f801c2bdc20fc8495d9dd26255f5467c342daa74. Dec 16 04:10:17.427166 systemd[1]: Started cri-containerd-c58f7d51aece8688bc65b723ffa01211059320670c8560d00cc36657e7dac292.scope - libcontainer container c58f7d51aece8688bc65b723ffa01211059320670c8560d00cc36657e7dac292. Dec 16 04:10:17.452951 kubelet[2596]: I1216 04:10:17.452906 2596 kubelet_node_status.go:75] "Attempting to register node" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:17.453981 kubelet[2596]: E1216 04:10:17.453940 2596 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.69.46:6443/api/v1/nodes\": dial tcp 10.230.69.46:6443: connect: connection refused" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:17.457000 audit: BPF prog-id=86 op=LOAD Dec 16 04:10:17.458000 audit: BPF prog-id=87 op=LOAD Dec 16 04:10:17.458000 audit[2689]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2652 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533336336313966396636323438653563383434666131366231643039 Dec 16 04:10:17.458000 
audit: BPF prog-id=87 op=UNLOAD Dec 16 04:10:17.458000 audit[2689]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533336336313966396636323438653563383434666131366231643039 Dec 16 04:10:17.463000 audit: BPF prog-id=88 op=LOAD Dec 16 04:10:17.463000 audit[2689]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2652 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533336336313966396636323438653563383434666131366231643039 Dec 16 04:10:17.463000 audit: BPF prog-id=89 op=LOAD Dec 16 04:10:17.463000 audit[2689]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2652 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.463000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533336336313966396636323438653563383434666131366231643039 Dec 16 04:10:17.463000 audit: BPF prog-id=89 op=UNLOAD Dec 16 04:10:17.463000 audit[2689]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533336336313966396636323438653563383434666131366231643039 Dec 16 04:10:17.463000 audit: BPF prog-id=88 op=UNLOAD Dec 16 04:10:17.463000 audit[2689]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533336336313966396636323438653563383434666131366231643039 Dec 16 04:10:17.463000 audit: BPF prog-id=90 op=LOAD Dec 16 04:10:17.463000 audit[2689]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2652 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
04:10:17.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533336336313966396636323438653563383434666131366231643039 Dec 16 04:10:17.469000 audit: BPF prog-id=91 op=LOAD Dec 16 04:10:17.470000 audit: BPF prog-id=92 op=LOAD Dec 16 04:10:17.470000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2653 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335386637643531616563653836383862633635623732336666613031 Dec 16 04:10:17.470000 audit: BPF prog-id=92 op=UNLOAD Dec 16 04:10:17.470000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2653 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335386637643531616563653836383862633635623732336666613031 Dec 16 04:10:17.471000 audit: BPF prog-id=93 op=LOAD Dec 16 04:10:17.471000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2653 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335386637643531616563653836383862633635623732336666613031 Dec 16 04:10:17.471000 audit: BPF prog-id=94 op=LOAD Dec 16 04:10:17.471000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2653 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335386637643531616563653836383862633635623732336666613031 Dec 16 04:10:17.471000 audit: BPF prog-id=94 op=UNLOAD Dec 16 04:10:17.471000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2653 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335386637643531616563653836383862633635623732336666613031 Dec 16 04:10:17.471000 audit: BPF prog-id=93 op=UNLOAD Dec 16 04:10:17.471000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2653 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335386637643531616563653836383862633635623732336666613031 Dec 16 04:10:17.471000 audit: BPF prog-id=95 op=LOAD Dec 16 04:10:17.471000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2653 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335386637643531616563653836383862633635623732336666613031 Dec 16 04:10:17.486000 audit: BPF prog-id=96 op=LOAD Dec 16 04:10:17.487000 audit: BPF prog-id=97 op=LOAD Dec 16 04:10:17.487000 audit[2691]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2660 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338393837366562353138366432383732636234383363306638303163 Dec 16 04:10:17.487000 audit: BPF prog-id=97 op=UNLOAD Dec 16 04:10:17.487000 audit[2691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 
a1=0 a2=0 a3=0 items=0 ppid=2660 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338393837366562353138366432383732636234383363306638303163 Dec 16 04:10:17.488000 audit: BPF prog-id=98 op=LOAD Dec 16 04:10:17.488000 audit[2691]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2660 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338393837366562353138366432383732636234383363306638303163 Dec 16 04:10:17.488000 audit: BPF prog-id=99 op=LOAD Dec 16 04:10:17.488000 audit[2691]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2660 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338393837366562353138366432383732636234383363306638303163 Dec 16 04:10:17.489000 audit: BPF prog-id=99 op=UNLOAD Dec 16 04:10:17.489000 audit[2691]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2660 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338393837366562353138366432383732636234383363306638303163 Dec 16 04:10:17.489000 audit: BPF prog-id=98 op=UNLOAD Dec 16 04:10:17.489000 audit[2691]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2660 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338393837366562353138366432383732636234383363306638303163 Dec 16 04:10:17.489000 audit: BPF prog-id=100 op=LOAD Dec 16 04:10:17.489000 audit[2691]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2660 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338393837366562353138366432383732636234383363306638303163 Dec 16 04:10:17.595291 containerd[1631]: 
time="2025-12-16T04:10:17.595084569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-cuii1.gb1.brightbox.com,Uid:80428432a3aac2ddd5e9be9dae72a577,Namespace:kube-system,Attempt:0,} returns sandbox id \"e33c619f9f6248e5c844fa16b1d09dfa5373799a4a27b981717f66b5c9719096\"" Dec 16 04:10:17.601914 containerd[1631]: time="2025-12-16T04:10:17.601870294Z" level=info msg="CreateContainer within sandbox \"e33c619f9f6248e5c844fa16b1d09dfa5373799a4a27b981717f66b5c9719096\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 04:10:17.609602 containerd[1631]: time="2025-12-16T04:10:17.609551661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-cuii1.gb1.brightbox.com,Uid:1089326a866948ecac28069a5c5b429d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c58f7d51aece8688bc65b723ffa01211059320670c8560d00cc36657e7dac292\"" Dec 16 04:10:17.615189 containerd[1631]: time="2025-12-16T04:10:17.615151493Z" level=info msg="CreateContainer within sandbox \"c58f7d51aece8688bc65b723ffa01211059320670c8560d00cc36657e7dac292\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 04:10:17.636534 containerd[1631]: time="2025-12-16T04:10:17.636483155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-cuii1.gb1.brightbox.com,Uid:403bf233a26370d3019d5ea568cafe20,Namespace:kube-system,Attempt:0,} returns sandbox id \"389876eb5186d2872cb483c0f801c2bdc20fc8495d9dd26255f5467c342daa74\"" Dec 16 04:10:17.641293 containerd[1631]: time="2025-12-16T04:10:17.641222605Z" level=info msg="CreateContainer within sandbox \"389876eb5186d2872cb483c0f801c2bdc20fc8495d9dd26255f5467c342daa74\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 04:10:17.643534 containerd[1631]: time="2025-12-16T04:10:17.643501617Z" level=info msg="Container a248518ed32a28323c752d8dc08a8b6a805f707b798bf8aec9f2d816ed138faa: CDI devices from CRI Config.CDIDevices: []" 
Dec 16 04:10:17.645022 containerd[1631]: time="2025-12-16T04:10:17.644948752Z" level=info msg="Container 8fd5fb3f1a21d96b45b3f3ecc57ced54b9a8cb7027da7355bfff4404f6b367e8: CDI devices from CRI Config.CDIDevices: []" Dec 16 04:10:17.659696 containerd[1631]: time="2025-12-16T04:10:17.659609744Z" level=info msg="Container 6b658d8e978dea69fc00ce586309c0aad5b4070b39606d741a448104cb9cdb46: CDI devices from CRI Config.CDIDevices: []" Dec 16 04:10:17.669538 containerd[1631]: time="2025-12-16T04:10:17.669409461Z" level=info msg="CreateContainer within sandbox \"c58f7d51aece8688bc65b723ffa01211059320670c8560d00cc36657e7dac292\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8fd5fb3f1a21d96b45b3f3ecc57ced54b9a8cb7027da7355bfff4404f6b367e8\"" Dec 16 04:10:17.669929 containerd[1631]: time="2025-12-16T04:10:17.669897033Z" level=info msg="CreateContainer within sandbox \"e33c619f9f6248e5c844fa16b1d09dfa5373799a4a27b981717f66b5c9719096\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a248518ed32a28323c752d8dc08a8b6a805f707b798bf8aec9f2d816ed138faa\"" Dec 16 04:10:17.673480 containerd[1631]: time="2025-12-16T04:10:17.673417653Z" level=info msg="StartContainer for \"a248518ed32a28323c752d8dc08a8b6a805f707b798bf8aec9f2d816ed138faa\"" Dec 16 04:10:17.675207 containerd[1631]: time="2025-12-16T04:10:17.675086593Z" level=info msg="CreateContainer within sandbox \"389876eb5186d2872cb483c0f801c2bdc20fc8495d9dd26255f5467c342daa74\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6b658d8e978dea69fc00ce586309c0aad5b4070b39606d741a448104cb9cdb46\"" Dec 16 04:10:17.676365 containerd[1631]: time="2025-12-16T04:10:17.676336364Z" level=info msg="StartContainer for \"8fd5fb3f1a21d96b45b3f3ecc57ced54b9a8cb7027da7355bfff4404f6b367e8\"" Dec 16 04:10:17.677795 containerd[1631]: time="2025-12-16T04:10:17.677694275Z" level=info msg="connecting to shim 
a248518ed32a28323c752d8dc08a8b6a805f707b798bf8aec9f2d816ed138faa" address="unix:///run/containerd/s/781fedde0b3d9871128478d9f9e599ed7d9cdc9c05745618a6cb5689e25fd76b" protocol=ttrpc version=3 Dec 16 04:10:17.678482 containerd[1631]: time="2025-12-16T04:10:17.678443444Z" level=info msg="connecting to shim 8fd5fb3f1a21d96b45b3f3ecc57ced54b9a8cb7027da7355bfff4404f6b367e8" address="unix:///run/containerd/s/e2c2af4b61407c173deb3b5b831cbcea6208ff95c57546d71766f5dc5082219d" protocol=ttrpc version=3 Dec 16 04:10:17.680248 containerd[1631]: time="2025-12-16T04:10:17.680031341Z" level=info msg="StartContainer for \"6b658d8e978dea69fc00ce586309c0aad5b4070b39606d741a448104cb9cdb46\"" Dec 16 04:10:17.686191 containerd[1631]: time="2025-12-16T04:10:17.685968079Z" level=info msg="connecting to shim 6b658d8e978dea69fc00ce586309c0aad5b4070b39606d741a448104cb9cdb46" address="unix:///run/containerd/s/61bf8e2a49753bb4558eff78b3548d2cc7eaf0649161d048f02ee1f607c38952" protocol=ttrpc version=3 Dec 16 04:10:17.724628 systemd[1]: Started cri-containerd-8fd5fb3f1a21d96b45b3f3ecc57ced54b9a8cb7027da7355bfff4404f6b367e8.scope - libcontainer container 8fd5fb3f1a21d96b45b3f3ecc57ced54b9a8cb7027da7355bfff4404f6b367e8. Dec 16 04:10:17.727952 systemd[1]: Started cri-containerd-a248518ed32a28323c752d8dc08a8b6a805f707b798bf8aec9f2d816ed138faa.scope - libcontainer container a248518ed32a28323c752d8dc08a8b6a805f707b798bf8aec9f2d816ed138faa. Dec 16 04:10:17.741607 systemd[1]: Started cri-containerd-6b658d8e978dea69fc00ce586309c0aad5b4070b39606d741a448104cb9cdb46.scope - libcontainer container 6b658d8e978dea69fc00ce586309c0aad5b4070b39606d741a448104cb9cdb46. 
Dec 16 04:10:17.760000 audit: BPF prog-id=101 op=LOAD Dec 16 04:10:17.762193 kubelet[2596]: W1216 04:10:17.761985 2596 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.69.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-cuii1.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.69.46:6443: connect: connection refused Dec 16 04:10:17.762413 kubelet[2596]: E1216 04:10:17.762355 2596 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.69.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-cuii1.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.69.46:6443: connect: connection refused" logger="UnhandledError" Dec 16 04:10:17.761000 audit: BPF prog-id=102 op=LOAD Dec 16 04:10:17.761000 audit[2770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2653 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866643566623366316132316439366234356233663365636335376365 Dec 16 04:10:17.761000 audit: BPF prog-id=102 op=UNLOAD Dec 16 04:10:17.761000 audit[2770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2653 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.761000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866643566623366316132316439366234356233663365636335376365 Dec 16 04:10:17.762000 audit: BPF prog-id=103 op=LOAD Dec 16 04:10:17.762000 audit[2770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2653 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866643566623366316132316439366234356233663365636335376365 Dec 16 04:10:17.762000 audit: BPF prog-id=104 op=LOAD Dec 16 04:10:17.762000 audit[2770]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2653 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866643566623366316132316439366234356233663365636335376365 Dec 16 04:10:17.762000 audit: BPF prog-id=104 op=UNLOAD Dec 16 04:10:17.762000 audit[2770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2653 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 04:10:17.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866643566623366316132316439366234356233663365636335376365 Dec 16 04:10:17.762000 audit: BPF prog-id=103 op=UNLOAD Dec 16 04:10:17.762000 audit[2770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2653 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866643566623366316132316439366234356233663365636335376365 Dec 16 04:10:17.762000 audit: BPF prog-id=105 op=LOAD Dec 16 04:10:17.762000 audit[2770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2653 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866643566623366316132316439366234356233663365636335376365 Dec 16 04:10:17.776000 audit: BPF prog-id=106 op=LOAD Dec 16 04:10:17.777000 audit: BPF prog-id=107 op=LOAD Dec 16 04:10:17.777000 audit[2769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2652 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132343835313865643332613238333233633735326438646330386138 Dec 16 04:10:17.777000 audit: BPF prog-id=107 op=UNLOAD Dec 16 04:10:17.777000 audit[2769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132343835313865643332613238333233633735326438646330386138 Dec 16 04:10:17.777000 audit: BPF prog-id=108 op=LOAD Dec 16 04:10:17.777000 audit[2769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2652 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132343835313865643332613238333233633735326438646330386138 Dec 16 04:10:17.777000 audit: BPF prog-id=109 op=LOAD Dec 16 04:10:17.777000 audit[2769]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2652 pid=2769 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132343835313865643332613238333233633735326438646330386138 Dec 16 04:10:17.777000 audit: BPF prog-id=109 op=UNLOAD Dec 16 04:10:17.777000 audit[2769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132343835313865643332613238333233633735326438646330386138 Dec 16 04:10:17.779000 audit: BPF prog-id=108 op=UNLOAD Dec 16 04:10:17.779000 audit[2769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132343835313865643332613238333233633735326438646330386138 Dec 16 04:10:17.779000 audit: BPF prog-id=110 op=LOAD Dec 16 04:10:17.779000 audit[2769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 
ppid=2652 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132343835313865643332613238333233633735326438646330386138 Dec 16 04:10:17.811000 audit: BPF prog-id=111 op=LOAD Dec 16 04:10:17.812000 audit: BPF prog-id=112 op=LOAD Dec 16 04:10:17.812000 audit[2776]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2660 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662363538643865393738646561363966633030636535383633303963 Dec 16 04:10:17.812000 audit: BPF prog-id=112 op=UNLOAD Dec 16 04:10:17.812000 audit[2776]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2660 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662363538643865393738646561363966633030636535383633303963 Dec 16 04:10:17.815045 kubelet[2596]: W1216 04:10:17.814232 2596 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.69.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.69.46:6443: connect: connection refused Dec 16 04:10:17.815045 kubelet[2596]: E1216 04:10:17.814323 2596 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.69.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.69.46:6443: connect: connection refused" logger="UnhandledError" Dec 16 04:10:17.813000 audit: BPF prog-id=113 op=LOAD Dec 16 04:10:17.813000 audit[2776]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2660 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662363538643865393738646561363966633030636535383633303963 Dec 16 04:10:17.814000 audit: BPF prog-id=114 op=LOAD Dec 16 04:10:17.814000 audit[2776]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2660 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.814000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662363538643865393738646561363966633030636535383633303963 Dec 16 04:10:17.815000 audit: BPF prog-id=114 op=UNLOAD Dec 16 04:10:17.815000 audit[2776]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2660 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662363538643865393738646561363966633030636535383633303963 Dec 16 04:10:17.815000 audit: BPF prog-id=113 op=UNLOAD Dec 16 04:10:17.815000 audit[2776]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2660 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:17.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662363538643865393738646561363966633030636535383633303963 Dec 16 04:10:17.815000 audit: BPF prog-id=115 op=LOAD Dec 16 04:10:17.815000 audit[2776]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2660 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
04:10:17.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662363538643865393738646561363966633030636535383633303963 Dec 16 04:10:17.870896 containerd[1631]: time="2025-12-16T04:10:17.870711347Z" level=info msg="StartContainer for \"8fd5fb3f1a21d96b45b3f3ecc57ced54b9a8cb7027da7355bfff4404f6b367e8\" returns successfully" Dec 16 04:10:17.888657 containerd[1631]: time="2025-12-16T04:10:17.888535529Z" level=info msg="StartContainer for \"a248518ed32a28323c752d8dc08a8b6a805f707b798bf8aec9f2d816ed138faa\" returns successfully" Dec 16 04:10:17.910984 containerd[1631]: time="2025-12-16T04:10:17.910619751Z" level=info msg="StartContainer for \"6b658d8e978dea69fc00ce586309c0aad5b4070b39606d741a448104cb9cdb46\" returns successfully" Dec 16 04:10:17.927448 kubelet[2596]: W1216 04:10:17.927243 2596 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.69.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.69.46:6443: connect: connection refused Dec 16 04:10:17.928138 kubelet[2596]: E1216 04:10:17.928070 2596 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.69.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.69.46:6443: connect: connection refused" logger="UnhandledError" Dec 16 04:10:17.941212 kubelet[2596]: W1216 04:10:17.940562 2596 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.69.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.69.46:6443: connect: connection refused Dec 16 04:10:17.941212 kubelet[2596]: E1216 04:10:17.940656 2596 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.69.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.69.46:6443: connect: connection refused" logger="UnhandledError" Dec 16 04:10:18.010657 kubelet[2596]: E1216 04:10:18.010594 2596 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.69.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-cuii1.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.69.46:6443: connect: connection refused" interval="1.6s" Dec 16 04:10:18.257235 kubelet[2596]: I1216 04:10:18.257110 2596 kubelet_node_status.go:75] "Attempting to register node" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:18.258545 kubelet[2596]: E1216 04:10:18.258510 2596 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.69.46:6443/api/v1/nodes\": dial tcp 10.230.69.46:6443: connect: connection refused" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:18.723218 kubelet[2596]: E1216 04:10:18.723168 2596 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cuii1.gb1.brightbox.com\" not found" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:18.725170 kubelet[2596]: E1216 04:10:18.725139 2596 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cuii1.gb1.brightbox.com\" not found" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:18.730395 kubelet[2596]: E1216 04:10:18.727548 2596 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cuii1.gb1.brightbox.com\" not found" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:19.727267 kubelet[2596]: E1216 04:10:19.727226 2596 kubelet.go:3190] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"srv-cuii1.gb1.brightbox.com\" not found" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:19.729454 kubelet[2596]: E1216 04:10:19.729426 2596 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cuii1.gb1.brightbox.com\" not found" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:19.729913 kubelet[2596]: E1216 04:10:19.729885 2596 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cuii1.gb1.brightbox.com\" not found" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:19.861315 kubelet[2596]: I1216 04:10:19.861232 2596 kubelet_node_status.go:75] "Attempting to register node" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:20.729568 kubelet[2596]: E1216 04:10:20.729529 2596 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cuii1.gb1.brightbox.com\" not found" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:20.730398 kubelet[2596]: E1216 04:10:20.730120 2596 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cuii1.gb1.brightbox.com\" not found" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:22.087319 kubelet[2596]: E1216 04:10:22.087268 2596 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-cuii1.gb1.brightbox.com\" not found" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:22.205481 kubelet[2596]: I1216 04:10:22.204962 2596 kubelet_node_status.go:78] "Successfully registered node" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:22.206686 kubelet[2596]: I1216 04:10:22.205811 2596 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:22.206828 kubelet[2596]: E1216 04:10:22.205012 2596 kubelet_node_status.go:548] "Error updating node status, will retry" err="error 
getting node \"srv-cuii1.gb1.brightbox.com\": node \"srv-cuii1.gb1.brightbox.com\" not found" Dec 16 04:10:22.280474 kubelet[2596]: E1216 04:10:22.280412 2596 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-cuii1.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:22.280474 kubelet[2596]: I1216 04:10:22.280466 2596 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:22.283445 kubelet[2596]: E1216 04:10:22.283414 2596 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-cuii1.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:22.283751 kubelet[2596]: I1216 04:10:22.283545 2596 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:22.287076 kubelet[2596]: E1216 04:10:22.287030 2596 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-cuii1.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:22.578533 kubelet[2596]: I1216 04:10:22.578486 2596 apiserver.go:52] "Watching apiserver" Dec 16 04:10:22.604787 kubelet[2596]: I1216 04:10:22.604725 2596 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 04:10:23.974474 kubelet[2596]: I1216 04:10:23.974383 2596 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:23.983771 kubelet[2596]: W1216 04:10:23.983721 2596 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 04:10:24.283851 systemd[1]: Reload requested from client PID 2870 ('systemctl') (unit session-12.scope)... Dec 16 04:10:24.284399 systemd[1]: Reloading... Dec 16 04:10:24.432457 zram_generator::config[2918]: No configuration found. Dec 16 04:10:24.825204 systemd[1]: Reloading finished in 540 ms. Dec 16 04:10:24.877655 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 04:10:24.891207 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 04:10:24.891724 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 04:10:24.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:24.898204 kernel: kauditd_printk_skb: 205 callbacks suppressed Dec 16 04:10:24.898290 kernel: audit: type=1131 audit(1765858224.891:401): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:24.902777 systemd[1]: kubelet.service: Consumed 1.358s CPU time, 129.9M memory peak. Dec 16 04:10:24.907017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 04:10:24.907000 audit: BPF prog-id=116 op=LOAD Dec 16 04:10:24.910441 kernel: audit: type=1334 audit(1765858224.907:402): prog-id=116 op=LOAD Dec 16 04:10:24.907000 audit: BPF prog-id=80 op=UNLOAD Dec 16 04:10:24.913433 kernel: audit: type=1334 audit(1765858224.907:403): prog-id=80 op=UNLOAD Dec 16 04:10:24.908000 audit: BPF prog-id=117 op=LOAD Dec 16 04:10:24.916437 kernel: audit: type=1334 audit(1765858224.908:404): prog-id=117 op=LOAD Dec 16 04:10:24.916550 kernel: audit: type=1334 audit(1765858224.908:405): prog-id=65 op=UNLOAD Dec 16 04:10:24.908000 audit: BPF prog-id=65 op=UNLOAD Dec 16 04:10:24.908000 audit: BPF prog-id=118 op=LOAD Dec 16 04:10:24.921403 kernel: audit: type=1334 audit(1765858224.908:406): prog-id=118 op=LOAD Dec 16 04:10:24.909000 audit: BPF prog-id=119 op=LOAD Dec 16 04:10:24.924418 kernel: audit: type=1334 audit(1765858224.909:407): prog-id=119 op=LOAD Dec 16 04:10:24.909000 audit: BPF prog-id=84 op=UNLOAD Dec 16 04:10:24.927435 kernel: audit: type=1334 audit(1765858224.909:408): prog-id=84 op=UNLOAD Dec 16 04:10:24.909000 audit: BPF prog-id=85 op=UNLOAD Dec 16 04:10:24.912000 audit: BPF prog-id=120 op=LOAD Dec 16 04:10:24.929881 kernel: audit: type=1334 audit(1765858224.909:409): prog-id=85 op=UNLOAD Dec 16 04:10:24.929953 kernel: audit: type=1334 audit(1765858224.912:410): prog-id=120 op=LOAD Dec 16 04:10:24.912000 audit: BPF prog-id=72 op=UNLOAD Dec 16 04:10:24.912000 audit: BPF prog-id=121 op=LOAD Dec 16 04:10:24.912000 audit: BPF prog-id=122 op=LOAD Dec 16 04:10:24.912000 audit: BPF prog-id=73 op=UNLOAD Dec 16 04:10:24.912000 audit: BPF prog-id=74 op=UNLOAD Dec 16 04:10:24.914000 audit: BPF prog-id=123 op=LOAD Dec 16 04:10:24.914000 audit: BPF prog-id=69 op=UNLOAD Dec 16 04:10:24.914000 audit: BPF prog-id=124 op=LOAD Dec 16 04:10:24.914000 audit: BPF prog-id=125 op=LOAD Dec 16 04:10:24.914000 audit: BPF prog-id=70 op=UNLOAD Dec 16 04:10:24.914000 audit: BPF prog-id=71 op=UNLOAD Dec 16 04:10:24.916000 audit: BPF prog-id=126 
op=LOAD Dec 16 04:10:24.916000 audit: BPF prog-id=76 op=UNLOAD Dec 16 04:10:24.917000 audit: BPF prog-id=127 op=LOAD Dec 16 04:10:24.917000 audit: BPF prog-id=128 op=LOAD Dec 16 04:10:24.917000 audit: BPF prog-id=77 op=UNLOAD Dec 16 04:10:24.917000 audit: BPF prog-id=78 op=UNLOAD Dec 16 04:10:24.920000 audit: BPF prog-id=129 op=LOAD Dec 16 04:10:24.920000 audit: BPF prog-id=79 op=UNLOAD Dec 16 04:10:24.921000 audit: BPF prog-id=130 op=LOAD Dec 16 04:10:24.921000 audit: BPF prog-id=75 op=UNLOAD Dec 16 04:10:24.924000 audit: BPF prog-id=131 op=LOAD Dec 16 04:10:24.924000 audit: BPF prog-id=81 op=UNLOAD Dec 16 04:10:24.924000 audit: BPF prog-id=132 op=LOAD Dec 16 04:10:24.924000 audit: BPF prog-id=133 op=LOAD Dec 16 04:10:24.924000 audit: BPF prog-id=82 op=UNLOAD Dec 16 04:10:24.924000 audit: BPF prog-id=83 op=UNLOAD Dec 16 04:10:24.925000 audit: BPF prog-id=134 op=LOAD Dec 16 04:10:24.925000 audit: BPF prog-id=66 op=UNLOAD Dec 16 04:10:24.925000 audit: BPF prog-id=135 op=LOAD Dec 16 04:10:24.925000 audit: BPF prog-id=136 op=LOAD Dec 16 04:10:24.925000 audit: BPF prog-id=67 op=UNLOAD Dec 16 04:10:24.925000 audit: BPF prog-id=68 op=UNLOAD Dec 16 04:10:25.258189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 04:10:25.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:25.275896 (kubelet)[2982]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 04:10:25.380344 kubelet[2982]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 04:10:25.380344 kubelet[2982]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 04:10:25.380344 kubelet[2982]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 04:10:25.380344 kubelet[2982]: I1216 04:10:25.380161 2982 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 04:10:25.400140 kubelet[2982]: I1216 04:10:25.399303 2982 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 04:10:25.400140 kubelet[2982]: I1216 04:10:25.399399 2982 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 04:10:25.400140 kubelet[2982]: I1216 04:10:25.399766 2982 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 04:10:25.403207 kubelet[2982]: I1216 04:10:25.403066 2982 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 04:10:25.410818 kubelet[2982]: I1216 04:10:25.409933 2982 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 04:10:25.422247 kubelet[2982]: I1216 04:10:25.422220 2982 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 04:10:25.429257 kubelet[2982]: I1216 04:10:25.429231 2982 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 04:10:25.437513 kubelet[2982]: I1216 04:10:25.437215 2982 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 04:10:25.438455 kubelet[2982]: I1216 04:10:25.438134 2982 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-cuii1.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 04:10:25.438585 kubelet[2982]: I1216 04:10:25.438487 2982 topology_manager.go:138] "Creating topology manager 
with none policy" Dec 16 04:10:25.438585 kubelet[2982]: I1216 04:10:25.438505 2982 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 04:10:25.441801 kubelet[2982]: I1216 04:10:25.441702 2982 state_mem.go:36] "Initialized new in-memory state store" Dec 16 04:10:25.442394 kubelet[2982]: I1216 04:10:25.442157 2982 kubelet.go:446] "Attempting to sync node with API server" Dec 16 04:10:25.442394 kubelet[2982]: I1216 04:10:25.442205 2982 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 04:10:25.442394 kubelet[2982]: I1216 04:10:25.442260 2982 kubelet.go:352] "Adding apiserver pod source" Dec 16 04:10:25.442394 kubelet[2982]: I1216 04:10:25.442295 2982 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 04:10:25.448990 kubelet[2982]: I1216 04:10:25.448949 2982 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 04:10:25.451227 kubelet[2982]: I1216 04:10:25.450782 2982 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 04:10:25.459305 kubelet[2982]: I1216 04:10:25.458444 2982 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 04:10:25.459305 kubelet[2982]: I1216 04:10:25.458512 2982 server.go:1287] "Started kubelet" Dec 16 04:10:25.468905 kubelet[2982]: I1216 04:10:25.468704 2982 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 04:10:25.476900 kubelet[2982]: I1216 04:10:25.476842 2982 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 04:10:25.478471 kubelet[2982]: I1216 04:10:25.478443 2982 server.go:479] "Adding debug handlers to kubelet server" Dec 16 04:10:25.480064 kubelet[2982]: I1216 04:10:25.479931 2982 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 04:10:25.480765 kubelet[2982]: I1216 04:10:25.480277 2982 server.go:243] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 04:10:25.480765 kubelet[2982]: I1216 04:10:25.480589 2982 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 04:10:25.486169 kubelet[2982]: I1216 04:10:25.484097 2982 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 04:10:25.487084 kubelet[2982]: I1216 04:10:25.486798 2982 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 04:10:25.487084 kubelet[2982]: I1216 04:10:25.487040 2982 reconciler.go:26] "Reconciler: start to sync state" Dec 16 04:10:25.491981 kubelet[2982]: I1216 04:10:25.489603 2982 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 04:10:25.491981 kubelet[2982]: I1216 04:10:25.491118 2982 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 04:10:25.491981 kubelet[2982]: I1216 04:10:25.491166 2982 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 04:10:25.491981 kubelet[2982]: I1216 04:10:25.491205 2982 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 04:10:25.491981 kubelet[2982]: I1216 04:10:25.491221 2982 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 04:10:25.491981 kubelet[2982]: E1216 04:10:25.491306 2982 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 04:10:25.512045 kubelet[2982]: I1216 04:10:25.511822 2982 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 04:10:25.531172 kubelet[2982]: I1216 04:10:25.530358 2982 factory.go:221] Registration of the containerd container factory successfully Dec 16 04:10:25.531172 kubelet[2982]: I1216 04:10:25.530398 2982 factory.go:221] Registration of the systemd container factory successfully Dec 16 04:10:25.591831 kubelet[2982]: E1216 04:10:25.591782 2982 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 04:10:25.626921 kubelet[2982]: I1216 04:10:25.626693 2982 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 04:10:25.626921 kubelet[2982]: I1216 04:10:25.626720 2982 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 04:10:25.626921 kubelet[2982]: I1216 04:10:25.626849 2982 state_mem.go:36] "Initialized new in-memory state store" Dec 16 04:10:25.627157 kubelet[2982]: I1216 04:10:25.627118 2982 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 04:10:25.627157 kubelet[2982]: I1216 04:10:25.627137 2982 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 04:10:25.627243 kubelet[2982]: I1216 04:10:25.627172 2982 policy_none.go:49] "None policy: Start" Dec 16 04:10:25.627243 kubelet[2982]: I1216 04:10:25.627200 2982 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 04:10:25.627243 kubelet[2982]: I1216 04:10:25.627229 2982 state_mem.go:35] "Initializing new 
in-memory state store" Dec 16 04:10:25.627557 kubelet[2982]: I1216 04:10:25.627457 2982 state_mem.go:75] "Updated machine memory state" Dec 16 04:10:25.641447 kubelet[2982]: I1216 04:10:25.641412 2982 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 04:10:25.641799 kubelet[2982]: I1216 04:10:25.641762 2982 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 04:10:25.641887 kubelet[2982]: I1216 04:10:25.641803 2982 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 04:10:25.645042 kubelet[2982]: I1216 04:10:25.644891 2982 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 04:10:25.653499 kubelet[2982]: E1216 04:10:25.652939 2982 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 04:10:25.771707 kubelet[2982]: I1216 04:10:25.770760 2982 kubelet_node_status.go:75] "Attempting to register node" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:25.780984 kubelet[2982]: I1216 04:10:25.780927 2982 kubelet_node_status.go:124] "Node was previously registered" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:25.781322 kubelet[2982]: I1216 04:10:25.781061 2982 kubelet_node_status.go:78] "Successfully registered node" node="srv-cuii1.gb1.brightbox.com" Dec 16 04:10:25.796161 kubelet[2982]: I1216 04:10:25.796102 2982 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:25.803403 kubelet[2982]: W1216 04:10:25.803353 2982 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 04:10:25.806827 kubelet[2982]: I1216 04:10:25.806797 2982 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:25.807885 kubelet[2982]: I1216 04:10:25.807860 2982 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:25.817444 kubelet[2982]: W1216 04:10:25.817413 2982 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 04:10:25.817772 kubelet[2982]: W1216 04:10:25.817749 2982 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 04:10:25.818063 kubelet[2982]: E1216 04:10:25.818008 2982 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-cuii1.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:25.890678 kubelet[2982]: I1216 04:10:25.890010 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80428432a3aac2ddd5e9be9dae72a577-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-cuii1.gb1.brightbox.com\" (UID: \"80428432a3aac2ddd5e9be9dae72a577\") " pod="kube-system/kube-controller-manager-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:25.890678 kubelet[2982]: I1216 04:10:25.890069 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80428432a3aac2ddd5e9be9dae72a577-ca-certs\") pod \"kube-controller-manager-srv-cuii1.gb1.brightbox.com\" (UID: \"80428432a3aac2ddd5e9be9dae72a577\") " pod="kube-system/kube-controller-manager-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:25.890678 kubelet[2982]: I1216 04:10:25.890104 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80428432a3aac2ddd5e9be9dae72a577-flexvolume-dir\") pod \"kube-controller-manager-srv-cuii1.gb1.brightbox.com\" (UID: \"80428432a3aac2ddd5e9be9dae72a577\") " pod="kube-system/kube-controller-manager-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:25.890678 kubelet[2982]: I1216 04:10:25.890130 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80428432a3aac2ddd5e9be9dae72a577-kubeconfig\") pod \"kube-controller-manager-srv-cuii1.gb1.brightbox.com\" (UID: \"80428432a3aac2ddd5e9be9dae72a577\") " pod="kube-system/kube-controller-manager-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:25.891014 kubelet[2982]: I1216 04:10:25.890159 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1089326a866948ecac28069a5c5b429d-usr-share-ca-certificates\") pod \"kube-apiserver-srv-cuii1.gb1.brightbox.com\" (UID: \"1089326a866948ecac28069a5c5b429d\") " pod="kube-system/kube-apiserver-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:25.891014 kubelet[2982]: I1216 04:10:25.890183 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80428432a3aac2ddd5e9be9dae72a577-k8s-certs\") pod \"kube-controller-manager-srv-cuii1.gb1.brightbox.com\" (UID: \"80428432a3aac2ddd5e9be9dae72a577\") " pod="kube-system/kube-controller-manager-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:25.891014 kubelet[2982]: I1216 04:10:25.890210 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/403bf233a26370d3019d5ea568cafe20-kubeconfig\") pod \"kube-scheduler-srv-cuii1.gb1.brightbox.com\" (UID: \"403bf233a26370d3019d5ea568cafe20\") " 
pod="kube-system/kube-scheduler-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:25.891014 kubelet[2982]: I1216 04:10:25.890237 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1089326a866948ecac28069a5c5b429d-ca-certs\") pod \"kube-apiserver-srv-cuii1.gb1.brightbox.com\" (UID: \"1089326a866948ecac28069a5c5b429d\") " pod="kube-system/kube-apiserver-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:25.891014 kubelet[2982]: I1216 04:10:25.890264 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1089326a866948ecac28069a5c5b429d-k8s-certs\") pod \"kube-apiserver-srv-cuii1.gb1.brightbox.com\" (UID: \"1089326a866948ecac28069a5c5b429d\") " pod="kube-system/kube-apiserver-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:26.445587 kubelet[2982]: I1216 04:10:26.445524 2982 apiserver.go:52] "Watching apiserver" Dec 16 04:10:26.487128 kubelet[2982]: I1216 04:10:26.487053 2982 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 04:10:26.586826 kubelet[2982]: I1216 04:10:26.586258 2982 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:26.598116 kubelet[2982]: W1216 04:10:26.598078 2982 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 04:10:26.598608 kubelet[2982]: E1216 04:10:26.598350 2982 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-cuii1.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-cuii1.gb1.brightbox.com" Dec 16 04:10:26.610985 kubelet[2982]: I1216 04:10:26.610859 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-srv-cuii1.gb1.brightbox.com" podStartSLOduration=1.6108067510000001 podStartE2EDuration="1.610806751s" podCreationTimestamp="2025-12-16 04:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 04:10:26.579341939 +0000 UTC m=+1.291084571" watchObservedRunningTime="2025-12-16 04:10:26.610806751 +0000 UTC m=+1.322549358" Dec 16 04:10:26.611457 kubelet[2982]: I1216 04:10:26.611418 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-cuii1.gb1.brightbox.com" podStartSLOduration=1.611407221 podStartE2EDuration="1.611407221s" podCreationTimestamp="2025-12-16 04:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 04:10:26.608872037 +0000 UTC m=+1.320614683" watchObservedRunningTime="2025-12-16 04:10:26.611407221 +0000 UTC m=+1.323149843" Dec 16 04:10:26.645671 kubelet[2982]: I1216 04:10:26.645415 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-cuii1.gb1.brightbox.com" podStartSLOduration=3.645375412 podStartE2EDuration="3.645375412s" podCreationTimestamp="2025-12-16 04:10:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 04:10:26.623800947 +0000 UTC m=+1.335543581" watchObservedRunningTime="2025-12-16 04:10:26.645375412 +0000 UTC m=+1.357118044" Dec 16 04:10:31.035178 kubelet[2982]: I1216 04:10:31.035131 2982 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 04:10:31.036853 containerd[1631]: time="2025-12-16T04:10:31.036726646Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 16 04:10:31.038862 kubelet[2982]: I1216 04:10:31.037442 2982 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 04:10:32.010080 systemd[1]: Created slice kubepods-besteffort-pod0a1d4efd_2e3f_4148_9a49_b2c00a3b69e0.slice - libcontainer container kubepods-besteffort-pod0a1d4efd_2e3f_4148_9a49_b2c00a3b69e0.slice. Dec 16 04:10:32.030647 kubelet[2982]: I1216 04:10:32.030587 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a1d4efd-2e3f-4148-9a49-b2c00a3b69e0-xtables-lock\") pod \"kube-proxy-zv2c9\" (UID: \"0a1d4efd-2e3f-4148-9a49-b2c00a3b69e0\") " pod="kube-system/kube-proxy-zv2c9" Dec 16 04:10:32.030838 kubelet[2982]: I1216 04:10:32.030647 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w46nt\" (UniqueName: \"kubernetes.io/projected/0a1d4efd-2e3f-4148-9a49-b2c00a3b69e0-kube-api-access-w46nt\") pod \"kube-proxy-zv2c9\" (UID: \"0a1d4efd-2e3f-4148-9a49-b2c00a3b69e0\") " pod="kube-system/kube-proxy-zv2c9" Dec 16 04:10:32.030838 kubelet[2982]: I1216 04:10:32.030706 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0a1d4efd-2e3f-4148-9a49-b2c00a3b69e0-kube-proxy\") pod \"kube-proxy-zv2c9\" (UID: \"0a1d4efd-2e3f-4148-9a49-b2c00a3b69e0\") " pod="kube-system/kube-proxy-zv2c9" Dec 16 04:10:32.030838 kubelet[2982]: I1216 04:10:32.030736 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a1d4efd-2e3f-4148-9a49-b2c00a3b69e0-lib-modules\") pod \"kube-proxy-zv2c9\" (UID: \"0a1d4efd-2e3f-4148-9a49-b2c00a3b69e0\") " pod="kube-system/kube-proxy-zv2c9" Dec 16 04:10:32.203680 systemd[1]: Created slice 
kubepods-besteffort-pod77b1fda9_ab24_443c_979c_e061fb889f22.slice - libcontainer container kubepods-besteffort-pod77b1fda9_ab24_443c_979c_e061fb889f22.slice. Dec 16 04:10:32.233003 kubelet[2982]: I1216 04:10:32.232910 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r26tp\" (UniqueName: \"kubernetes.io/projected/77b1fda9-ab24-443c-979c-e061fb889f22-kube-api-access-r26tp\") pod \"tigera-operator-7dcd859c48-dx96r\" (UID: \"77b1fda9-ab24-443c-979c-e061fb889f22\") " pod="tigera-operator/tigera-operator-7dcd859c48-dx96r" Dec 16 04:10:32.233003 kubelet[2982]: I1216 04:10:32.233000 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/77b1fda9-ab24-443c-979c-e061fb889f22-var-lib-calico\") pod \"tigera-operator-7dcd859c48-dx96r\" (UID: \"77b1fda9-ab24-443c-979c-e061fb889f22\") " pod="tigera-operator/tigera-operator-7dcd859c48-dx96r" Dec 16 04:10:32.325804 containerd[1631]: time="2025-12-16T04:10:32.325320210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zv2c9,Uid:0a1d4efd-2e3f-4148-9a49-b2c00a3b69e0,Namespace:kube-system,Attempt:0,}" Dec 16 04:10:32.358140 containerd[1631]: time="2025-12-16T04:10:32.357903039Z" level=info msg="connecting to shim 65afdf2f2ea9cbbc13f39b4bf2ebc9e449b41c07983301456709b4b2e9d2c02f" address="unix:///run/containerd/s/fd28e7485ef0030b22c3de1db35a1f5a0139eab2658bf1bbc86970b2b48c1b34" namespace=k8s.io protocol=ttrpc version=3 Dec 16 04:10:32.403843 systemd[1]: Started cri-containerd-65afdf2f2ea9cbbc13f39b4bf2ebc9e449b41c07983301456709b4b2e9d2c02f.scope - libcontainer container 65afdf2f2ea9cbbc13f39b4bf2ebc9e449b41c07983301456709b4b2e9d2c02f. 
Dec 16 04:10:32.430000 audit: BPF prog-id=137 op=LOAD Dec 16 04:10:32.438938 kernel: kauditd_printk_skb: 34 callbacks suppressed Dec 16 04:10:32.439093 kernel: audit: type=1334 audit(1765858232.430:445): prog-id=137 op=LOAD Dec 16 04:10:32.440000 audit: BPF prog-id=138 op=LOAD Dec 16 04:10:32.447965 kernel: audit: type=1334 audit(1765858232.440:446): prog-id=138 op=LOAD Dec 16 04:10:32.448101 kernel: audit: type=1300 audit(1765858232.440:446): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3038 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.440000 audit[3049]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3038 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635616664663266326561396362626331336633396234626632656263 Dec 16 04:10:32.453509 kernel: audit: type=1327 audit(1765858232.440:446): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635616664663266326561396362626331336633396234626632656263 Dec 16 04:10:32.453597 kernel: audit: type=1334 audit(1765858232.440:447): prog-id=138 op=UNLOAD Dec 16 04:10:32.440000 audit: BPF prog-id=138 op=UNLOAD Dec 16 04:10:32.440000 audit[3049]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3038 pid=3049 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635616664663266326561396362626331336633396234626632656263 Dec 16 04:10:32.462712 kernel: audit: type=1300 audit(1765858232.440:447): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3038 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.462807 kernel: audit: type=1327 audit(1765858232.440:447): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635616664663266326561396362626331336633396234626632656263 Dec 16 04:10:32.440000 audit: BPF prog-id=139 op=LOAD Dec 16 04:10:32.474194 kernel: audit: type=1334 audit(1765858232.440:448): prog-id=139 op=LOAD Dec 16 04:10:32.474341 kernel: audit: type=1300 audit(1765858232.440:448): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3038 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.440000 audit[3049]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3038 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
04:10:32.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635616664663266326561396362626331336633396234626632656263 Dec 16 04:10:32.480583 kernel: audit: type=1327 audit(1765858232.440:448): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635616664663266326561396362626331336633396234626632656263 Dec 16 04:10:32.440000 audit: BPF prog-id=140 op=LOAD Dec 16 04:10:32.440000 audit[3049]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3038 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635616664663266326561396362626331336633396234626632656263 Dec 16 04:10:32.440000 audit: BPF prog-id=140 op=UNLOAD Dec 16 04:10:32.440000 audit[3049]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3038 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635616664663266326561396362626331336633396234626632656263 
Dec 16 04:10:32.440000 audit: BPF prog-id=139 op=UNLOAD Dec 16 04:10:32.440000 audit[3049]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3038 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635616664663266326561396362626331336633396234626632656263 Dec 16 04:10:32.440000 audit: BPF prog-id=141 op=LOAD Dec 16 04:10:32.440000 audit[3049]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3038 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635616664663266326561396362626331336633396234626632656263 Dec 16 04:10:32.486902 containerd[1631]: time="2025-12-16T04:10:32.486816947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zv2c9,Uid:0a1d4efd-2e3f-4148-9a49-b2c00a3b69e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"65afdf2f2ea9cbbc13f39b4bf2ebc9e449b41c07983301456709b4b2e9d2c02f\"" Dec 16 04:10:32.495461 containerd[1631]: time="2025-12-16T04:10:32.494193409Z" level=info msg="CreateContainer within sandbox \"65afdf2f2ea9cbbc13f39b4bf2ebc9e449b41c07983301456709b4b2e9d2c02f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 04:10:32.511644 containerd[1631]: 
time="2025-12-16T04:10:32.511528664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dx96r,Uid:77b1fda9-ab24-443c-979c-e061fb889f22,Namespace:tigera-operator,Attempt:0,}" Dec 16 04:10:32.526883 containerd[1631]: time="2025-12-16T04:10:32.526705365Z" level=info msg="Container d877d338c9835aae44b162b9743bda01fdfebac311668ab11fe12c41106eb4da: CDI devices from CRI Config.CDIDevices: []" Dec 16 04:10:32.550404 containerd[1631]: time="2025-12-16T04:10:32.550258866Z" level=info msg="CreateContainer within sandbox \"65afdf2f2ea9cbbc13f39b4bf2ebc9e449b41c07983301456709b4b2e9d2c02f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d877d338c9835aae44b162b9743bda01fdfebac311668ab11fe12c41106eb4da\"" Dec 16 04:10:32.551474 containerd[1631]: time="2025-12-16T04:10:32.551359357Z" level=info msg="StartContainer for \"d877d338c9835aae44b162b9743bda01fdfebac311668ab11fe12c41106eb4da\"" Dec 16 04:10:32.555902 containerd[1631]: time="2025-12-16T04:10:32.555856500Z" level=info msg="connecting to shim 792e7b6a2dcc037a233c6605be21a14883b937b99a17eddc61481b4a56d64965" address="unix:///run/containerd/s/32948dfbb9b620d12af8099b6247c093e6242acccad0dcedf3ddf0c8c59bed58" namespace=k8s.io protocol=ttrpc version=3 Dec 16 04:10:32.557571 containerd[1631]: time="2025-12-16T04:10:32.557525904Z" level=info msg="connecting to shim d877d338c9835aae44b162b9743bda01fdfebac311668ab11fe12c41106eb4da" address="unix:///run/containerd/s/fd28e7485ef0030b22c3de1db35a1f5a0139eab2658bf1bbc86970b2b48c1b34" protocol=ttrpc version=3 Dec 16 04:10:32.596631 systemd[1]: Started cri-containerd-d877d338c9835aae44b162b9743bda01fdfebac311668ab11fe12c41106eb4da.scope - libcontainer container d877d338c9835aae44b162b9743bda01fdfebac311668ab11fe12c41106eb4da. 
Dec 16 04:10:32.606636 systemd[1]: Started cri-containerd-792e7b6a2dcc037a233c6605be21a14883b937b99a17eddc61481b4a56d64965.scope - libcontainer container 792e7b6a2dcc037a233c6605be21a14883b937b99a17eddc61481b4a56d64965. Dec 16 04:10:32.635000 audit: BPF prog-id=142 op=LOAD Dec 16 04:10:32.636000 audit: BPF prog-id=143 op=LOAD Dec 16 04:10:32.636000 audit[3097]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3083 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739326537623661326463633033376132333363363630356265323161 Dec 16 04:10:32.636000 audit: BPF prog-id=143 op=UNLOAD Dec 16 04:10:32.636000 audit[3097]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3083 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739326537623661326463633033376132333363363630356265323161 Dec 16 04:10:32.637000 audit: BPF prog-id=144 op=LOAD Dec 16 04:10:32.637000 audit[3097]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3083 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 04:10:32.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739326537623661326463633033376132333363363630356265323161 Dec 16 04:10:32.637000 audit: BPF prog-id=145 op=LOAD Dec 16 04:10:32.637000 audit[3097]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3083 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739326537623661326463633033376132333363363630356265323161 Dec 16 04:10:32.637000 audit: BPF prog-id=145 op=UNLOAD Dec 16 04:10:32.637000 audit[3097]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3083 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739326537623661326463633033376132333363363630356265323161 Dec 16 04:10:32.637000 audit: BPF prog-id=144 op=UNLOAD Dec 16 04:10:32.637000 audit[3097]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3083 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739326537623661326463633033376132333363363630356265323161 Dec 16 04:10:32.637000 audit: BPF prog-id=146 op=LOAD Dec 16 04:10:32.637000 audit[3097]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3083 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739326537623661326463633033376132333363363630356265323161 Dec 16 04:10:32.683000 audit: BPF prog-id=147 op=LOAD Dec 16 04:10:32.683000 audit[3094]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3038 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438373764333338633938333561616534346231363262393734336264 Dec 16 04:10:32.684000 audit: BPF prog-id=148 op=LOAD Dec 16 04:10:32.684000 audit[3094]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3038 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438373764333338633938333561616534346231363262393734336264 Dec 16 04:10:32.687000 audit: BPF prog-id=148 op=UNLOAD Dec 16 04:10:32.687000 audit[3094]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3038 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438373764333338633938333561616534346231363262393734336264 Dec 16 04:10:32.687000 audit: BPF prog-id=147 op=UNLOAD Dec 16 04:10:32.687000 audit[3094]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3038 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438373764333338633938333561616534346231363262393734336264 Dec 16 04:10:32.687000 audit: BPF prog-id=149 op=LOAD Dec 16 04:10:32.687000 audit[3094]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3038 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:32.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438373764333338633938333561616534346231363262393734336264 Dec 16 04:10:32.722185 containerd[1631]: time="2025-12-16T04:10:32.722114263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dx96r,Uid:77b1fda9-ab24-443c-979c-e061fb889f22,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"792e7b6a2dcc037a233c6605be21a14883b937b99a17eddc61481b4a56d64965\"" Dec 16 04:10:32.726757 containerd[1631]: time="2025-12-16T04:10:32.726700091Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 04:10:32.742399 containerd[1631]: time="2025-12-16T04:10:32.742344121Z" level=info msg="StartContainer for \"d877d338c9835aae44b162b9743bda01fdfebac311668ab11fe12c41106eb4da\" returns successfully" Dec 16 04:10:33.159844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4160192543.mount: Deactivated successfully. 
Dec 16 04:10:33.251000 audit[3184]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.251000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc4149eb0 a2=0 a3=7ffcc4149e9c items=0 ppid=3120 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.251000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 04:10:33.253000 audit[3186]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.253000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4b656570 a2=0 a3=7ffd4b65655c items=0 ppid=3120 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.253000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 04:10:33.257000 audit[3187]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.257000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb83f1140 a2=0 a3=7ffdb83f112c items=0 ppid=3120 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.257000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 04:10:33.259000 audit[3188]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.259000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc24bf23d0 a2=0 a3=7ffc24bf23bc items=0 ppid=3120 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 04:10:33.260000 audit[3189]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.260000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea4cfb210 a2=0 a3=7ffea4cfb1fc items=0 ppid=3120 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.260000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 04:10:33.266000 audit[3190]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.266000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca955b200 a2=0 a3=7ffca955b1ec items=0 ppid=3120 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 04:10:33.266000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 04:10:33.367000 audit[3191]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.367000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffca840a3a0 a2=0 a3=7ffca840a38c items=0 ppid=3120 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.367000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 04:10:33.378000 audit[3193]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.378000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd9ee0b430 a2=0 a3=7ffd9ee0b41c items=0 ppid=3120 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.378000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 04:10:33.384000 audit[3196]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3196 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.384000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffda429d2e0 a2=0 a3=7ffda429d2cc items=0 ppid=3120 
pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.384000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 04:10:33.386000 audit[3197]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3197 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.386000 audit[3197]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc86d16a70 a2=0 a3=7ffc86d16a5c items=0 ppid=3120 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.386000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 04:10:33.390000 audit[3199]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3199 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.390000 audit[3199]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe4d2e3240 a2=0 a3=7ffe4d2e322c items=0 ppid=3120 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.390000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 
04:10:33.392000 audit[3200]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.392000 audit[3200]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbeecc3d0 a2=0 a3=7fffbeecc3bc items=0 ppid=3120 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 04:10:33.396000 audit[3202]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.396000 audit[3202]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc7824c340 a2=0 a3=7ffc7824c32c items=0 ppid=3120 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.396000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 04:10:33.402000 audit[3205]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.402000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcf4f47130 a2=0 a3=7ffcf4f4711c items=0 ppid=3120 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 04:10:33.402000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 04:10:33.404000 audit[3206]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3206 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.404000 audit[3206]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd42789e60 a2=0 a3=7ffd42789e4c items=0 ppid=3120 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.404000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 04:10:33.408000 audit[3208]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3208 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.408000 audit[3208]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdf1f74730 a2=0 a3=7ffdf1f7471c items=0 ppid=3120 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.408000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 04:10:33.411000 audit[3209]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.411000 audit[3209]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=104 a0=3 a1=7ffe9ce961b0 a2=0 a3=7ffe9ce9619c items=0 ppid=3120 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.411000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 04:10:33.415000 audit[3211]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.415000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc8b2b00d0 a2=0 a3=7ffc8b2b00bc items=0 ppid=3120 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.415000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 04:10:33.421000 audit[3214]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.421000 audit[3214]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcc01f2b40 a2=0 a3=7ffcc01f2b2c items=0 ppid=3120 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.421000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 04:10:33.427000 audit[3217]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3217 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.427000 audit[3217]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd1b6b0c60 a2=0 a3=7ffd1b6b0c4c items=0 ppid=3120 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.427000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 04:10:33.429000 audit[3218]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.429000 audit[3218]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffee6b06a80 a2=0 a3=7ffee6b06a6c items=0 ppid=3120 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.429000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 04:10:33.433000 audit[3220]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3220 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.433000 audit[3220]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7ffdd47278a0 a2=0 a3=7ffdd472788c items=0 ppid=3120 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.433000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 04:10:33.439000 audit[3223]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.439000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffde0d94c00 a2=0 a3=7ffde0d94bec items=0 ppid=3120 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.439000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 04:10:33.441000 audit[3224]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.441000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc06639190 a2=0 a3=7ffc0663917c items=0 ppid=3120 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.441000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 
04:10:33.445000 audit[3226]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 04:10:33.445000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fffffd24320 a2=0 a3=7fffffd2430c items=0 ppid=3120 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.445000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 04:10:33.477000 audit[3232]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:33.477000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc96fd7900 a2=0 a3=7ffc96fd78ec items=0 ppid=3120 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.477000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:33.490000 audit[3232]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:33.490000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc96fd7900 a2=0 a3=7ffc96fd78ec items=0 ppid=3120 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.490000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:33.494000 audit[3237]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3237 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.494000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffeb2472c40 a2=0 a3=7ffeb2472c2c items=0 ppid=3120 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.494000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 04:10:33.499000 audit[3239]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.499000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffee0d96b30 a2=0 a3=7ffee0d96b1c items=0 ppid=3120 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.499000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 04:10:33.505000 audit[3242]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.505000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 
a1=7ffe8cc6a500 a2=0 a3=7ffe8cc6a4ec items=0 ppid=3120 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.505000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 04:10:33.506000 audit[3243]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3243 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.506000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe97c3bc0 a2=0 a3=7fffe97c3bac items=0 ppid=3120 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.506000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 04:10:33.510000 audit[3245]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.510000 audit[3245]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc6900ac40 a2=0 a3=7ffc6900ac2c items=0 ppid=3120 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.510000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 04:10:33.512000 audit[3246]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3246 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.512000 audit[3246]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3537f510 a2=0 a3=7ffe3537f4fc items=0 ppid=3120 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.512000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 04:10:33.516000 audit[3248]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3248 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.516000 audit[3248]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdb0df27e0 a2=0 a3=7ffdb0df27cc items=0 ppid=3120 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.516000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 04:10:33.522000 audit[3251]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3251 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.522000 audit[3251]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7ffc59b369d0 a2=0 a3=7ffc59b369bc items=0 ppid=3120 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.522000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 04:10:33.524000 audit[3252]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3252 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.524000 audit[3252]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7aa14950 a2=0 a3=7ffe7aa1493c items=0 ppid=3120 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.524000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 04:10:33.529000 audit[3254]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3254 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.529000 audit[3254]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe50ec8b40 a2=0 a3=7ffe50ec8b2c items=0 ppid=3120 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.529000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 04:10:33.531000 audit[3255]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.531000 audit[3255]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff0f97ec90 a2=0 a3=7fff0f97ec7c items=0 ppid=3120 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.531000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 04:10:33.535000 audit[3257]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3257 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.535000 audit[3257]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc5da85ed0 a2=0 a3=7ffc5da85ebc items=0 ppid=3120 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.535000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 04:10:33.545000 audit[3260]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.545000 audit[3260]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7ffe4af8b260 a2=0 a3=7ffe4af8b24c items=0 ppid=3120 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.545000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 04:10:33.551000 audit[3263]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3263 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.551000 audit[3263]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffebac8e7c0 a2=0 a3=7ffebac8e7ac items=0 ppid=3120 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.551000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 04:10:33.553000 audit[3264]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3264 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.553000 audit[3264]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe1f3a8160 a2=0 a3=7ffe1f3a814c items=0 ppid=3120 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.553000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 04:10:33.558000 audit[3266]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.558000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe7e4ef3f0 a2=0 a3=7ffe7e4ef3dc items=0 ppid=3120 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.558000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 04:10:33.564000 audit[3269]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.564000 audit[3269]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffde91ee880 a2=0 a3=7ffde91ee86c items=0 ppid=3120 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.564000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 04:10:33.566000 audit[3270]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3270 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.566000 audit[3270]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe43e14440 a2=0 a3=7ffe43e1442c items=0 ppid=3120 
pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.566000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 04:10:33.570000 audit[3272]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.570000 audit[3272]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff4b129310 a2=0 a3=7fff4b1292fc items=0 ppid=3120 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.570000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 04:10:33.572000 audit[3273]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3273 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.572000 audit[3273]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd66442bd0 a2=0 a3=7ffd66442bbc items=0 ppid=3120 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.572000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 04:10:33.576000 audit[3275]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3275 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 16 04:10:33.576000 audit[3275]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc10138370 a2=0 a3=7ffc1013835c items=0 ppid=3120 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.576000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 04:10:33.582000 audit[3278]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 04:10:33.582000 audit[3278]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc207879a0 a2=0 a3=7ffc2078798c items=0 ppid=3120 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.582000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 04:10:33.587000 audit[3280]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3280 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 04:10:33.587000 audit[3280]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffda7938860 a2=0 a3=7ffda793884c items=0 ppid=3120 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.587000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:33.588000 audit[3280]: NETFILTER_CFG table=nat:104 
family=10 entries=7 op=nft_register_chain pid=3280 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 04:10:33.588000 audit[3280]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffda7938860 a2=0 a3=7ffda793884c items=0 ppid=3120 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:33.588000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:33.633616 kubelet[2982]: I1216 04:10:33.633389 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zv2c9" podStartSLOduration=2.632351599 podStartE2EDuration="2.632351599s" podCreationTimestamp="2025-12-16 04:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 04:10:33.631703601 +0000 UTC m=+8.343446221" watchObservedRunningTime="2025-12-16 04:10:33.632351599 +0000 UTC m=+8.344094221" Dec 16 04:10:36.352242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608327474.mount: Deactivated successfully. 
Dec 16 04:10:37.801175 containerd[1631]: time="2025-12-16T04:10:37.801117257Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:10:37.802524 containerd[1631]: time="2025-12-16T04:10:37.802209756Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23560844" Dec 16 04:10:37.803210 containerd[1631]: time="2025-12-16T04:10:37.803169852Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:10:37.832119 containerd[1631]: time="2025-12-16T04:10:37.832020248Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:10:37.833131 containerd[1631]: time="2025-12-16T04:10:37.833076629Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 5.106287142s" Dec 16 04:10:37.833208 containerd[1631]: time="2025-12-16T04:10:37.833141979Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 04:10:37.837816 containerd[1631]: time="2025-12-16T04:10:37.837768027Z" level=info msg="CreateContainer within sandbox \"792e7b6a2dcc037a233c6605be21a14883b937b99a17eddc61481b4a56d64965\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 04:10:37.861493 containerd[1631]: time="2025-12-16T04:10:37.861192044Z" level=info msg="Container 
55c64da291ac765d0565a15c792337b861c5a30f97cf51fa0ad8f8be77cec79d: CDI devices from CRI Config.CDIDevices: []" Dec 16 04:10:37.861596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1136255996.mount: Deactivated successfully. Dec 16 04:10:37.868640 containerd[1631]: time="2025-12-16T04:10:37.868540392Z" level=info msg="CreateContainer within sandbox \"792e7b6a2dcc037a233c6605be21a14883b937b99a17eddc61481b4a56d64965\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"55c64da291ac765d0565a15c792337b861c5a30f97cf51fa0ad8f8be77cec79d\"" Dec 16 04:10:37.869597 containerd[1631]: time="2025-12-16T04:10:37.869333689Z" level=info msg="StartContainer for \"55c64da291ac765d0565a15c792337b861c5a30f97cf51fa0ad8f8be77cec79d\"" Dec 16 04:10:37.871679 containerd[1631]: time="2025-12-16T04:10:37.871556497Z" level=info msg="connecting to shim 55c64da291ac765d0565a15c792337b861c5a30f97cf51fa0ad8f8be77cec79d" address="unix:///run/containerd/s/32948dfbb9b620d12af8099b6247c093e6242acccad0dcedf3ddf0c8c59bed58" protocol=ttrpc version=3 Dec 16 04:10:37.910689 systemd[1]: Started cri-containerd-55c64da291ac765d0565a15c792337b861c5a30f97cf51fa0ad8f8be77cec79d.scope - libcontainer container 55c64da291ac765d0565a15c792337b861c5a30f97cf51fa0ad8f8be77cec79d. 
Dec 16 04:10:37.944278 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 04:10:37.944470 kernel: audit: type=1334 audit(1765858237.935:517): prog-id=150 op=LOAD Dec 16 04:10:37.935000 audit: BPF prog-id=150 op=LOAD Dec 16 04:10:37.942000 audit: BPF prog-id=151 op=LOAD Dec 16 04:10:37.945824 kernel: audit: type=1334 audit(1765858237.942:518): prog-id=151 op=LOAD Dec 16 04:10:37.942000 audit[3289]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3083 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:37.948527 kernel: audit: type=1300 audit(1765858237.942:518): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3083 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:37.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633634646132393161633736356430353635613135633739323333 Dec 16 04:10:37.953745 kernel: audit: type=1327 audit(1765858237.942:518): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633634646132393161633736356430353635613135633739323333 Dec 16 04:10:37.942000 audit: BPF prog-id=151 op=UNLOAD Dec 16 04:10:37.958069 kernel: audit: type=1334 audit(1765858237.942:519): prog-id=151 op=UNLOAD Dec 16 04:10:37.942000 audit[3289]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3083 pid=3289 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:37.960201 kernel: audit: type=1300 audit(1765858237.942:519): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3083 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:37.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633634646132393161633736356430353635613135633739323333 Dec 16 04:10:37.965306 kernel: audit: type=1327 audit(1765858237.942:519): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633634646132393161633736356430353635613135633739323333 Dec 16 04:10:37.942000 audit: BPF prog-id=152 op=LOAD Dec 16 04:10:37.942000 audit[3289]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3083 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:37.975032 kernel: audit: type=1334 audit(1765858237.942:520): prog-id=152 op=LOAD Dec 16 04:10:37.975132 kernel: audit: type=1300 audit(1765858237.942:520): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3083 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
04:10:37.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633634646132393161633736356430353635613135633739323333 Dec 16 04:10:37.985503 kernel: audit: type=1327 audit(1765858237.942:520): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633634646132393161633736356430353635613135633739323333 Dec 16 04:10:37.942000 audit: BPF prog-id=153 op=LOAD Dec 16 04:10:37.942000 audit[3289]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3083 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:37.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633634646132393161633736356430353635613135633739323333 Dec 16 04:10:37.942000 audit: BPF prog-id=153 op=UNLOAD Dec 16 04:10:37.942000 audit[3289]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3083 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:37.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633634646132393161633736356430353635613135633739323333 
Dec 16 04:10:37.942000 audit: BPF prog-id=152 op=UNLOAD Dec 16 04:10:37.942000 audit[3289]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3083 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:37.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633634646132393161633736356430353635613135633739323333 Dec 16 04:10:37.942000 audit: BPF prog-id=154 op=LOAD Dec 16 04:10:37.942000 audit[3289]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3083 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:37.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633634646132393161633736356430353635613135633739323333 Dec 16 04:10:38.011178 containerd[1631]: time="2025-12-16T04:10:38.011125651Z" level=info msg="StartContainer for \"55c64da291ac765d0565a15c792337b861c5a30f97cf51fa0ad8f8be77cec79d\" returns successfully" Dec 16 04:10:43.556774 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 16 04:10:43.557720 kernel: audit: type=1106 audit(1765858243.546:525): pid=1951 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 04:10:43.546000 audit[1951]: USER_END pid=1951 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 04:10:43.548195 sudo[1951]: pam_unix(sudo:session): session closed for user root Dec 16 04:10:43.569905 kernel: audit: type=1104 audit(1765858243.547:526): pid=1951 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 04:10:43.547000 audit[1951]: CRED_DISP pid=1951 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 04:10:43.718612 sshd[1950]: Connection closed by 139.178.89.65 port 37516 Dec 16 04:10:43.721033 sshd-session[1946]: pam_unix(sshd:session): session closed for user core Dec 16 04:10:43.734549 kernel: audit: type=1106 audit(1765858243.725:527): pid=1946 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:10:43.725000 audit[1946]: USER_END pid=1946 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:10:43.733548 systemd[1]: sshd@8-10.230.69.46:22-139.178.89.65:37516.service: Deactivated successfully. 
Dec 16 04:10:43.744886 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 04:10:43.725000 audit[1946]: CRED_DISP pid=1946 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:10:43.748070 systemd[1]: session-12.scope: Consumed 7.845s CPU time, 153M memory peak. Dec 16 04:10:43.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.69.46:22-139.178.89.65:37516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:43.751653 kernel: audit: type=1104 audit(1765858243.725:528): pid=1946 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:10:43.751716 kernel: audit: type=1131 audit(1765858243.732:529): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.69.46:22-139.178.89.65:37516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:10:43.755468 systemd-logind[1613]: Session 12 logged out. Waiting for processes to exit. Dec 16 04:10:43.760217 systemd-logind[1613]: Removed session 12. 
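The audit PROCTITLE fields in the runc records above are the process command line, hex-encoded with NUL bytes separating the argv entries (the kernel truncates the value at 128 bytes, which is why the container ID is cut short). A minimal sketch that decodes the value copied verbatim from those records:

```python
# Decode an audit PROCTITLE value: hex-encoded, NUL-separated argv.
# The string below is copied verbatim from the runc audit records above.
PROCTITLE = (
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
    "2F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F"
    "2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E"
    "696F2F3535633634646132393161633736356430353635613135633739323333"
)

def decode_proctitle(hex_value: str) -> list[str]:
    """Hex-decode the value and split on NUL bytes to recover argv."""
    return bytes.fromhex(hex_value).decode("utf-8", errors="replace").split("\x00")

argv = decode_proctitle(PROCTITLE)
print(argv)
```

Decoding yields `runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/55c64da291ac765d0565a15c79233…`, matching the truncated prefix of the container ID that the later `StartContainer` line reports in full.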
Dec 16 04:10:44.348000 audit[3372]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:44.353400 kernel: audit: type=1325 audit(1765858244.348:530): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:44.348000 audit[3372]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffd626b840 a2=0 a3=7fffd626b82c items=0 ppid=3120 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:44.360430 kernel: audit: type=1300 audit(1765858244.348:530): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffd626b840 a2=0 a3=7fffd626b82c items=0 ppid=3120 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:44.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:44.366399 kernel: audit: type=1327 audit(1765858244.348:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:44.359000 audit[3372]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:44.371396 kernel: audit: type=1325 audit(1765858244.359:531): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:44.359000 audit[3372]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffd626b840 a2=0 a3=0 items=0 ppid=3120 pid=3372 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:44.380426 kernel: audit: type=1300 audit(1765858244.359:531): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffd626b840 a2=0 a3=0 items=0 ppid=3120 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:44.359000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:44.403000 audit[3374]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3374 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:44.403000 audit[3374]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd7958d010 a2=0 a3=7ffd7958cffc items=0 ppid=3120 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:44.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:44.408000 audit[3374]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3374 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:44.408000 audit[3374]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd7958d010 a2=0 a3=0 items=0 ppid=3120 pid=3374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:44.408000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:48.454000 audit[3377]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:48.454000 audit[3377]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc8b1bfb50 a2=0 a3=7ffc8b1bfb3c items=0 ppid=3120 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:48.454000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:48.459000 audit[3377]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:48.459000 audit[3377]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc8b1bfb50 a2=0 a3=0 items=0 ppid=3120 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:48.459000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:48.582000 audit[3379]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:48.584781 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 16 04:10:48.584852 kernel: audit: type=1325 audit(1765858248.582:536): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:48.582000 audit[3379]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=6736 a0=3 a1=7ffc33a7ac60 a2=0 a3=7ffc33a7ac4c items=0 ppid=3120 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:48.582000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:48.596328 kernel: audit: type=1300 audit(1765858248.582:536): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc33a7ac60 a2=0 a3=7ffc33a7ac4c items=0 ppid=3120 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:48.596429 kernel: audit: type=1327 audit(1765858248.582:536): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:48.593000 audit[3379]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:48.593000 audit[3379]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc33a7ac60 a2=0 a3=0 items=0 ppid=3120 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:48.603947 kernel: audit: type=1325 audit(1765858248.593:537): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:48.604046 kernel: audit: type=1300 audit(1765858248.593:537): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc33a7ac60 a2=0 a3=0 items=0 ppid=3120 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:48.610409 kernel: audit: type=1327 audit(1765858248.593:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:48.593000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:49.610000 audit[3381]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:49.615422 kernel: audit: type=1325 audit(1765858249.610:538): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:49.610000 audit[3381]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffc8ec8830 a2=0 a3=7fffc8ec881c items=0 ppid=3120 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:49.622498 kernel: audit: type=1300 audit(1765858249.610:538): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffc8ec8830 a2=0 a3=7fffc8ec881c items=0 ppid=3120 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:49.610000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:49.627446 kernel: audit: type=1327 audit(1765858249.610:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:49.635000 audit[3381]: 
NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:49.640488 kernel: audit: type=1325 audit(1765858249.635:539): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:49.635000 audit[3381]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffc8ec8830 a2=0 a3=0 items=0 ppid=3120 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:49.635000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:50.646000 audit[3383]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:50.646000 audit[3383]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff18cfdd10 a2=0 a3=7fff18cfdcfc items=0 ppid=3120 pid=3383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:50.646000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:50.652000 audit[3383]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:50.652000 audit[3383]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff18cfdd10 a2=0 a3=0 items=0 ppid=3120 pid=3383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:50.652000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:50.659512 kubelet[2982]: I1216 04:10:50.659292 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-dx96r" podStartSLOduration=13.549677406 podStartE2EDuration="18.65923771s" podCreationTimestamp="2025-12-16 04:10:32 +0000 UTC" firstStartedPulling="2025-12-16 04:10:32.725457573 +0000 UTC m=+7.437200177" lastFinishedPulling="2025-12-16 04:10:37.835017866 +0000 UTC m=+12.546760481" observedRunningTime="2025-12-16 04:10:38.658505152 +0000 UTC m=+13.370247774" watchObservedRunningTime="2025-12-16 04:10:50.65923771 +0000 UTC m=+25.370980324" Dec 16 04:10:50.675395 kubelet[2982]: W1216 04:10:50.674627 2982 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-cuii1.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-cuii1.gb1.brightbox.com' and this object Dec 16 04:10:50.675395 kubelet[2982]: E1216 04:10:50.674692 2982 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-cuii1.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-cuii1.gb1.brightbox.com' and this object" logger="UnhandledError" Dec 16 04:10:50.675395 kubelet[2982]: W1216 04:10:50.674769 2982 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User 
"system:node:srv-cuii1.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'srv-cuii1.gb1.brightbox.com' and this object Dec 16 04:10:50.675395 kubelet[2982]: E1216 04:10:50.674792 2982 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:srv-cuii1.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-cuii1.gb1.brightbox.com' and this object" logger="UnhandledError" Dec 16 04:10:50.675656 kubelet[2982]: I1216 04:10:50.674856 2982 status_manager.go:890] "Failed to get status for pod" podUID="8e3fd0ea-a68f-4e5b-bfa5-1591b2cc7dcf" pod="calico-system/calico-typha-795c8588fc-4f2xg" err="pods \"calico-typha-795c8588fc-4f2xg\" is forbidden: User \"system:node:srv-cuii1.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-cuii1.gb1.brightbox.com' and this object" Dec 16 04:10:50.675656 kubelet[2982]: W1216 04:10:50.674952 2982 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:srv-cuii1.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-cuii1.gb1.brightbox.com' and this object Dec 16 04:10:50.675656 kubelet[2982]: E1216 04:10:50.674978 2982 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:srv-cuii1.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no 
relationship found between node 'srv-cuii1.gb1.brightbox.com' and this object" logger="UnhandledError" Dec 16 04:10:50.676185 systemd[1]: Created slice kubepods-besteffort-pod8e3fd0ea_a68f_4e5b_bfa5_1591b2cc7dcf.slice - libcontainer container kubepods-besteffort-pod8e3fd0ea_a68f_4e5b_bfa5_1591b2cc7dcf.slice. Dec 16 04:10:50.793765 kubelet[2982]: I1216 04:10:50.793654 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkvqk\" (UniqueName: \"kubernetes.io/projected/8e3fd0ea-a68f-4e5b-bfa5-1591b2cc7dcf-kube-api-access-vkvqk\") pod \"calico-typha-795c8588fc-4f2xg\" (UID: \"8e3fd0ea-a68f-4e5b-bfa5-1591b2cc7dcf\") " pod="calico-system/calico-typha-795c8588fc-4f2xg" Dec 16 04:10:50.794174 kubelet[2982]: I1216 04:10:50.794069 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e3fd0ea-a68f-4e5b-bfa5-1591b2cc7dcf-tigera-ca-bundle\") pod \"calico-typha-795c8588fc-4f2xg\" (UID: \"8e3fd0ea-a68f-4e5b-bfa5-1591b2cc7dcf\") " pod="calico-system/calico-typha-795c8588fc-4f2xg" Dec 16 04:10:50.794396 kubelet[2982]: I1216 04:10:50.794278 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8e3fd0ea-a68f-4e5b-bfa5-1591b2cc7dcf-typha-certs\") pod \"calico-typha-795c8588fc-4f2xg\" (UID: \"8e3fd0ea-a68f-4e5b-bfa5-1591b2cc7dcf\") " pod="calico-system/calico-typha-795c8588fc-4f2xg" Dec 16 04:10:50.853149 systemd[1]: Created slice kubepods-besteffort-pod1c7f48d4_539e_40a5_bb48_57ed04850cf8.slice - libcontainer container kubepods-besteffort-pod1c7f48d4_539e_40a5_bb48_57ed04850cf8.slice. 
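The `pod_startup_latency_tracker` record above for the tigera-operator pod reports several durations at once. A quick consistency check, under the assumption (which these numbers bear out) that `podStartSLOduration` is the end-to-end startup duration minus the time spent pulling the image:

```python
# Figures copied from the kubelet pod_startup_latency_tracker record above.
# The m=+N.NN values are seconds since kubelet start (monotonic offsets).
e2e = 18.65923771                       # podStartE2EDuration
first_started_pulling = 7.437200177     # m=+ offset
last_finished_pulling = 12.546760481    # m=+ offset

pull = last_finished_pulling - first_started_pulling  # image-pull time
slo = e2e - pull                        # SLO duration excludes the pull

print(round(slo, 9))                    # matches podStartSLOduration
```

The result, 13.549677406 s, is exactly the `podStartSLOduration` the kubelet logged, confirming the relationship between the three reported durations.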
Dec 16 04:10:50.895211 kubelet[2982]: I1216 04:10:50.895160 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c7f48d4-539e-40a5-bb48-57ed04850cf8-tigera-ca-bundle\") pod \"calico-node-zqj5q\" (UID: \"1c7f48d4-539e-40a5-bb48-57ed04850cf8\") " pod="calico-system/calico-node-zqj5q" Dec 16 04:10:50.895396 kubelet[2982]: I1216 04:10:50.895223 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c7f48d4-539e-40a5-bb48-57ed04850cf8-xtables-lock\") pod \"calico-node-zqj5q\" (UID: \"1c7f48d4-539e-40a5-bb48-57ed04850cf8\") " pod="calico-system/calico-node-zqj5q" Dec 16 04:10:50.895396 kubelet[2982]: I1216 04:10:50.895255 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1c7f48d4-539e-40a5-bb48-57ed04850cf8-cni-net-dir\") pod \"calico-node-zqj5q\" (UID: \"1c7f48d4-539e-40a5-bb48-57ed04850cf8\") " pod="calico-system/calico-node-zqj5q" Dec 16 04:10:50.895396 kubelet[2982]: I1216 04:10:50.895281 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1c7f48d4-539e-40a5-bb48-57ed04850cf8-var-lib-calico\") pod \"calico-node-zqj5q\" (UID: \"1c7f48d4-539e-40a5-bb48-57ed04850cf8\") " pod="calico-system/calico-node-zqj5q" Dec 16 04:10:50.895396 kubelet[2982]: I1216 04:10:50.895333 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1c7f48d4-539e-40a5-bb48-57ed04850cf8-cni-log-dir\") pod \"calico-node-zqj5q\" (UID: \"1c7f48d4-539e-40a5-bb48-57ed04850cf8\") " pod="calico-system/calico-node-zqj5q" Dec 16 04:10:50.895592 kubelet[2982]: I1216 04:10:50.895372 2982 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1c7f48d4-539e-40a5-bb48-57ed04850cf8-policysync\") pod \"calico-node-zqj5q\" (UID: \"1c7f48d4-539e-40a5-bb48-57ed04850cf8\") " pod="calico-system/calico-node-zqj5q" Dec 16 04:10:50.895592 kubelet[2982]: I1216 04:10:50.895471 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1c7f48d4-539e-40a5-bb48-57ed04850cf8-flexvol-driver-host\") pod \"calico-node-zqj5q\" (UID: \"1c7f48d4-539e-40a5-bb48-57ed04850cf8\") " pod="calico-system/calico-node-zqj5q" Dec 16 04:10:50.895592 kubelet[2982]: I1216 04:10:50.895504 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1c7f48d4-539e-40a5-bb48-57ed04850cf8-var-run-calico\") pod \"calico-node-zqj5q\" (UID: \"1c7f48d4-539e-40a5-bb48-57ed04850cf8\") " pod="calico-system/calico-node-zqj5q" Dec 16 04:10:50.895592 kubelet[2982]: I1216 04:10:50.895530 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57zjp\" (UniqueName: \"kubernetes.io/projected/1c7f48d4-539e-40a5-bb48-57ed04850cf8-kube-api-access-57zjp\") pod \"calico-node-zqj5q\" (UID: \"1c7f48d4-539e-40a5-bb48-57ed04850cf8\") " pod="calico-system/calico-node-zqj5q" Dec 16 04:10:50.895592 kubelet[2982]: I1216 04:10:50.895570 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1c7f48d4-539e-40a5-bb48-57ed04850cf8-cni-bin-dir\") pod \"calico-node-zqj5q\" (UID: \"1c7f48d4-539e-40a5-bb48-57ed04850cf8\") " pod="calico-system/calico-node-zqj5q" Dec 16 04:10:50.895767 kubelet[2982]: I1216 04:10:50.895610 2982 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c7f48d4-539e-40a5-bb48-57ed04850cf8-lib-modules\") pod \"calico-node-zqj5q\" (UID: \"1c7f48d4-539e-40a5-bb48-57ed04850cf8\") " pod="calico-system/calico-node-zqj5q" Dec 16 04:10:50.895767 kubelet[2982]: I1216 04:10:50.895636 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1c7f48d4-539e-40a5-bb48-57ed04850cf8-node-certs\") pod \"calico-node-zqj5q\" (UID: \"1c7f48d4-539e-40a5-bb48-57ed04850cf8\") " pod="calico-system/calico-node-zqj5q" Dec 16 04:10:51.011782 kubelet[2982]: E1216 04:10:51.011537 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.011782 kubelet[2982]: W1216 04:10:51.011683 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.012197 kubelet[2982]: E1216 04:10:51.011881 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.045410 kubelet[2982]: E1216 04:10:51.045177 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:10:51.054124 kubelet[2982]: E1216 04:10:51.053968 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.054124 kubelet[2982]: W1216 04:10:51.054013 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.054124 kubelet[2982]: E1216 04:10:51.054041 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.054695 kubelet[2982]: E1216 04:10:51.054591 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.054695 kubelet[2982]: W1216 04:10:51.054609 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.054695 kubelet[2982]: E1216 04:10:51.054626 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.056331 kubelet[2982]: E1216 04:10:51.056072 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.056331 kubelet[2982]: W1216 04:10:51.056105 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.056331 kubelet[2982]: E1216 04:10:51.056124 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.076338 kubelet[2982]: E1216 04:10:51.076300 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.076670 kubelet[2982]: W1216 04:10:51.076510 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.076670 kubelet[2982]: E1216 04:10:51.076545 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.077203 kubelet[2982]: E1216 04:10:51.077182 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.077433 kubelet[2982]: W1216 04:10:51.077305 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.077433 kubelet[2982]: E1216 04:10:51.077334 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.077854 kubelet[2982]: E1216 04:10:51.077758 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.077854 kubelet[2982]: W1216 04:10:51.077777 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.077854 kubelet[2982]: E1216 04:10:51.077795 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.078400 kubelet[2982]: E1216 04:10:51.078221 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.078400 kubelet[2982]: W1216 04:10:51.078240 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.078400 kubelet[2982]: E1216 04:10:51.078256 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.078877 kubelet[2982]: E1216 04:10:51.078764 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.078877 kubelet[2982]: W1216 04:10:51.078795 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.078877 kubelet[2982]: E1216 04:10:51.078811 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.079338 kubelet[2982]: E1216 04:10:51.079247 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.079338 kubelet[2982]: W1216 04:10:51.079264 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.079338 kubelet[2982]: E1216 04:10:51.079280 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.079892 kubelet[2982]: E1216 04:10:51.079801 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.079892 kubelet[2982]: W1216 04:10:51.079819 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.079892 kubelet[2982]: E1216 04:10:51.079836 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.080350 kubelet[2982]: E1216 04:10:51.080234 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.080350 kubelet[2982]: W1216 04:10:51.080252 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.080350 kubelet[2982]: E1216 04:10:51.080284 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.080852 kubelet[2982]: E1216 04:10:51.080708 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.080852 kubelet[2982]: W1216 04:10:51.080726 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.080852 kubelet[2982]: E1216 04:10:51.080742 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.081293 kubelet[2982]: E1216 04:10:51.081204 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.081293 kubelet[2982]: W1216 04:10:51.081222 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.081293 kubelet[2982]: E1216 04:10:51.081238 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.081794 kubelet[2982]: E1216 04:10:51.081667 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.081794 kubelet[2982]: W1216 04:10:51.081685 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.081794 kubelet[2982]: E1216 04:10:51.081701 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.082170 kubelet[2982]: E1216 04:10:51.082063 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.082170 kubelet[2982]: W1216 04:10:51.082080 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.082170 kubelet[2982]: E1216 04:10:51.082110 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.082622 kubelet[2982]: E1216 04:10:51.082519 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.082622 kubelet[2982]: W1216 04:10:51.082537 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.082622 kubelet[2982]: E1216 04:10:51.082552 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.083329 kubelet[2982]: E1216 04:10:51.083040 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.083329 kubelet[2982]: W1216 04:10:51.083191 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.083329 kubelet[2982]: E1216 04:10:51.083256 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.083822 kubelet[2982]: E1216 04:10:51.083802 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.083987 kubelet[2982]: W1216 04:10:51.083905 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.083987 kubelet[2982]: E1216 04:10:51.083929 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.084536 kubelet[2982]: E1216 04:10:51.084440 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.084536 kubelet[2982]: W1216 04:10:51.084459 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.084536 kubelet[2982]: E1216 04:10:51.084475 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.085054 kubelet[2982]: E1216 04:10:51.084945 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.085054 kubelet[2982]: W1216 04:10:51.084963 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.085054 kubelet[2982]: E1216 04:10:51.084979 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.097203 kubelet[2982]: E1216 04:10:51.097137 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.097507 kubelet[2982]: W1216 04:10:51.097164 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.097507 kubelet[2982]: E1216 04:10:51.097305 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.098340 kubelet[2982]: I1216 04:10:51.097577 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d6d2249c-912c-448c-8aa3-089c6b8243d1-registration-dir\") pod \"csi-node-driver-cd92j\" (UID: \"d6d2249c-912c-448c-8aa3-089c6b8243d1\") " pod="calico-system/csi-node-driver-cd92j" Dec 16 04:10:51.099030 kubelet[2982]: E1216 04:10:51.098919 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.099030 kubelet[2982]: W1216 04:10:51.098939 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.099030 kubelet[2982]: E1216 04:10:51.098966 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.099551 kubelet[2982]: E1216 04:10:51.099509 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.099551 kubelet[2982]: W1216 04:10:51.099528 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.099840 kubelet[2982]: E1216 04:10:51.099697 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.100023 kubelet[2982]: E1216 04:10:51.100003 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.100186 kubelet[2982]: W1216 04:10:51.100135 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.100186 kubelet[2982]: E1216 04:10:51.100161 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.100368 kubelet[2982]: I1216 04:10:51.100329 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d6d2249c-912c-448c-8aa3-089c6b8243d1-socket-dir\") pod \"csi-node-driver-cd92j\" (UID: \"d6d2249c-912c-448c-8aa3-089c6b8243d1\") " pod="calico-system/csi-node-driver-cd92j" Dec 16 04:10:51.101337 kubelet[2982]: E1216 04:10:51.101295 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.101337 kubelet[2982]: W1216 04:10:51.101314 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.101621 kubelet[2982]: E1216 04:10:51.101519 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.102213 kubelet[2982]: E1216 04:10:51.102168 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.102213 kubelet[2982]: W1216 04:10:51.102188 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.102538 kubelet[2982]: E1216 04:10:51.102343 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.102759 kubelet[2982]: I1216 04:10:51.102701 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6d2249c-912c-448c-8aa3-089c6b8243d1-kubelet-dir\") pod \"csi-node-driver-cd92j\" (UID: \"d6d2249c-912c-448c-8aa3-089c6b8243d1\") " pod="calico-system/csi-node-driver-cd92j" Dec 16 04:10:51.102918 kubelet[2982]: E1216 04:10:51.102899 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.103128 kubelet[2982]: W1216 04:10:51.103003 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.103128 kubelet[2982]: E1216 04:10:51.103026 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.104246 kubelet[2982]: E1216 04:10:51.104201 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.104246 kubelet[2982]: W1216 04:10:51.104221 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.104535 kubelet[2982]: E1216 04:10:51.104433 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.104843 kubelet[2982]: E1216 04:10:51.104802 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.104843 kubelet[2982]: W1216 04:10:51.104820 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.105063 kubelet[2982]: E1216 04:10:51.105039 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.105610 kubelet[2982]: E1216 04:10:51.105428 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.105610 kubelet[2982]: W1216 04:10:51.105446 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.105610 kubelet[2982]: E1216 04:10:51.105462 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.105610 kubelet[2982]: I1216 04:10:51.105493 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d6d2249c-912c-448c-8aa3-089c6b8243d1-varrun\") pod \"csi-node-driver-cd92j\" (UID: \"d6d2249c-912c-448c-8aa3-089c6b8243d1\") " pod="calico-system/csi-node-driver-cd92j" Dec 16 04:10:51.106040 kubelet[2982]: E1216 04:10:51.105996 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.106040 kubelet[2982]: W1216 04:10:51.106017 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.107216 kubelet[2982]: E1216 04:10:51.107070 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.107538 kubelet[2982]: E1216 04:10:51.107342 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.107538 kubelet[2982]: W1216 04:10:51.107356 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.107538 kubelet[2982]: E1216 04:10:51.107405 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.107741 kubelet[2982]: I1216 04:10:51.107716 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhg5p\" (UniqueName: \"kubernetes.io/projected/d6d2249c-912c-448c-8aa3-089c6b8243d1-kube-api-access-fhg5p\") pod \"csi-node-driver-cd92j\" (UID: \"d6d2249c-912c-448c-8aa3-089c6b8243d1\") " pod="calico-system/csi-node-driver-cd92j" Dec 16 04:10:51.107957 kubelet[2982]: E1216 04:10:51.107902 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.107957 kubelet[2982]: W1216 04:10:51.107919 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.107957 kubelet[2982]: E1216 04:10:51.107934 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.108572 kubelet[2982]: E1216 04:10:51.108480 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.108572 kubelet[2982]: W1216 04:10:51.108499 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.108572 kubelet[2982]: E1216 04:10:51.108515 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.109024 kubelet[2982]: E1216 04:10:51.109002 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.109197 kubelet[2982]: W1216 04:10:51.109139 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.109197 kubelet[2982]: E1216 04:10:51.109164 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.209837 kubelet[2982]: E1216 04:10:51.209733 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.209837 kubelet[2982]: W1216 04:10:51.209770 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.209837 kubelet[2982]: E1216 04:10:51.209798 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.210850 kubelet[2982]: E1216 04:10:51.210804 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.210850 kubelet[2982]: W1216 04:10:51.210825 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.211264 kubelet[2982]: E1216 04:10:51.210998 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.211569 kubelet[2982]: E1216 04:10:51.211534 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.211808 kubelet[2982]: W1216 04:10:51.211644 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.211808 kubelet[2982]: E1216 04:10:51.211679 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.212054 kubelet[2982]: E1216 04:10:51.212031 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.212054 kubelet[2982]: W1216 04:10:51.212053 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.212595 kubelet[2982]: E1216 04:10:51.212093 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.213024 kubelet[2982]: E1216 04:10:51.213001 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.213024 kubelet[2982]: W1216 04:10:51.213022 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.213174 kubelet[2982]: E1216 04:10:51.213047 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.213554 kubelet[2982]: E1216 04:10:51.213532 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.213554 kubelet[2982]: W1216 04:10:51.213552 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.213795 kubelet[2982]: E1216 04:10:51.213645 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.214328 kubelet[2982]: E1216 04:10:51.214288 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.215634 kubelet[2982]: W1216 04:10:51.215445 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.215769 kubelet[2982]: E1216 04:10:51.215747 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.216036 kubelet[2982]: E1216 04:10:51.215994 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.216036 kubelet[2982]: W1216 04:10:51.216013 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.216342 kubelet[2982]: E1216 04:10:51.216307 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.216606 kubelet[2982]: E1216 04:10:51.216586 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.216770 kubelet[2982]: W1216 04:10:51.216667 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.216964 kubelet[2982]: E1216 04:10:51.216926 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.217346 kubelet[2982]: E1216 04:10:51.217304 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.217346 kubelet[2982]: W1216 04:10:51.217323 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.217624 kubelet[2982]: E1216 04:10:51.217581 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.218436 kubelet[2982]: E1216 04:10:51.218044 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.218436 kubelet[2982]: W1216 04:10:51.218128 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.218679 kubelet[2982]: E1216 04:10:51.218641 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.218679 kubelet[2982]: W1216 04:10:51.218657 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.218955 kubelet[2982]: E1216 04:10:51.218883 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.218955 kubelet[2982]: E1216 04:10:51.218914 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.219580 kubelet[2982]: E1216 04:10:51.219557 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.219580 kubelet[2982]: W1216 04:10:51.219577 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.219712 kubelet[2982]: E1216 04:10:51.219674 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.219944 kubelet[2982]: E1216 04:10:51.219924 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.219944 kubelet[2982]: W1216 04:10:51.219943 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.220201 kubelet[2982]: E1216 04:10:51.220094 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.220288 kubelet[2982]: E1216 04:10:51.220265 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.220288 kubelet[2982]: W1216 04:10:51.220285 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.220484 kubelet[2982]: E1216 04:10:51.220440 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.221487 kubelet[2982]: E1216 04:10:51.221439 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.221487 kubelet[2982]: W1216 04:10:51.221459 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.222538 kubelet[2982]: E1216 04:10:51.222501 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.222681 kubelet[2982]: E1216 04:10:51.222662 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.222807 kubelet[2982]: W1216 04:10:51.222775 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.223000 kubelet[2982]: E1216 04:10:51.222976 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.223700 kubelet[2982]: E1216 04:10:51.223546 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.223700 kubelet[2982]: W1216 04:10:51.223564 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.223978 kubelet[2982]: E1216 04:10:51.223957 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.224326 kubelet[2982]: E1216 04:10:51.224239 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.224326 kubelet[2982]: W1216 04:10:51.224285 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.224814 kubelet[2982]: E1216 04:10:51.224673 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.225468 kubelet[2982]: E1216 04:10:51.225411 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.225468 kubelet[2982]: W1216 04:10:51.225431 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.225902 kubelet[2982]: E1216 04:10:51.225876 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.226887 kubelet[2982]: E1216 04:10:51.226539 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.226887 kubelet[2982]: W1216 04:10:51.226558 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.227192 kubelet[2982]: E1216 04:10:51.227152 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.227483 kubelet[2982]: E1216 04:10:51.227443 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.227483 kubelet[2982]: W1216 04:10:51.227461 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.228493 kubelet[2982]: E1216 04:10:51.227663 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.228864 kubelet[2982]: E1216 04:10:51.228843 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.228961 kubelet[2982]: W1216 04:10:51.228940 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.229710 kubelet[2982]: E1216 04:10:51.229685 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.230098 kubelet[2982]: E1216 04:10:51.230065 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.230098 kubelet[2982]: W1216 04:10:51.230095 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.230270 kubelet[2982]: E1216 04:10:51.230200 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.230627 kubelet[2982]: E1216 04:10:51.230604 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.230627 kubelet[2982]: W1216 04:10:51.230625 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.230731 kubelet[2982]: E1216 04:10:51.230643 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.704000 audit[3458]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3458 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:51.704000 audit[3458]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffec775a300 a2=0 a3=7ffec775a2ec items=0 ppid=3120 pid=3458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:51.704000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:51.706000 audit[3458]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3458 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:10:51.706000 audit[3458]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffec775a300 a2=0 a3=0 items=0 ppid=3120 pid=3458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:51.706000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:10:51.742905 kubelet[2982]: E1216 04:10:51.738942 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.742905 kubelet[2982]: W1216 04:10:51.741281 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.742905 kubelet[2982]: E1216 04:10:51.741315 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.765515 kubelet[2982]: E1216 04:10:51.765475 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.768916 kubelet[2982]: W1216 04:10:51.767344 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.768916 kubelet[2982]: E1216 04:10:51.767413 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.770179 kubelet[2982]: E1216 04:10:51.770133 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.770179 kubelet[2982]: W1216 04:10:51.770160 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.770179 kubelet[2982]: E1216 04:10:51.770178 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.779099 kubelet[2982]: E1216 04:10:51.779050 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.779210 kubelet[2982]: W1216 04:10:51.779102 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.779210 kubelet[2982]: E1216 04:10:51.779132 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:51.779530 kubelet[2982]: E1216 04:10:51.779495 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.779530 kubelet[2982]: W1216 04:10:51.779518 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.779615 kubelet[2982]: E1216 04:10:51.779535 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:51.917909 kubelet[2982]: E1216 04:10:51.917382 2982 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Dec 16 04:10:51.917909 kubelet[2982]: E1216 04:10:51.917591 2982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e3fd0ea-a68f-4e5b-bfa5-1591b2cc7dcf-typha-certs podName:8e3fd0ea-a68f-4e5b-bfa5-1591b2cc7dcf nodeName:}" failed. No retries permitted until 2025-12-16 04:10:52.4175464 +0000 UTC m=+27.129289008 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/8e3fd0ea-a68f-4e5b-bfa5-1591b2cc7dcf-typha-certs") pod "calico-typha-795c8588fc-4f2xg" (UID: "8e3fd0ea-a68f-4e5b-bfa5-1591b2cc7dcf") : failed to sync secret cache: timed out waiting for the condition Dec 16 04:10:51.919617 kubelet[2982]: E1216 04:10:51.919594 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:51.919728 kubelet[2982]: W1216 04:10:51.919703 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:51.919896 kubelet[2982]: E1216 04:10:51.919817 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:52.021626 kubelet[2982]: E1216 04:10:52.021484 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:52.021626 kubelet[2982]: W1216 04:10:52.021519 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:52.021626 kubelet[2982]: E1216 04:10:52.021547 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:52.059828 containerd[1631]: time="2025-12-16T04:10:52.059670338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zqj5q,Uid:1c7f48d4-539e-40a5-bb48-57ed04850cf8,Namespace:calico-system,Attempt:0,}" Dec 16 04:10:52.094357 containerd[1631]: time="2025-12-16T04:10:52.094208170Z" level=info msg="connecting to shim b14ba40474052be58c56d4e22f81abf0e363d81b36a87832bead22f760ace845" address="unix:///run/containerd/s/e12ee8d71d1d5eddf3d6cff44d22087b8118a8a1ade495869de2a831108fec4b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 04:10:52.122536 kubelet[2982]: E1216 04:10:52.122501 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:52.122536 kubelet[2982]: W1216 04:10:52.122529 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:52.122758 kubelet[2982]: E1216 04:10:52.122567 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:52.136651 systemd[1]: Started cri-containerd-b14ba40474052be58c56d4e22f81abf0e363d81b36a87832bead22f760ace845.scope - libcontainer container b14ba40474052be58c56d4e22f81abf0e363d81b36a87832bead22f760ace845. 
Dec 16 04:10:52.157000 audit: BPF prog-id=155 op=LOAD Dec 16 04:10:52.158000 audit: BPF prog-id=156 op=LOAD Dec 16 04:10:52.158000 audit[3487]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3477 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:52.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231346261343034373430353262653538633536643465323266383161 Dec 16 04:10:52.158000 audit: BPF prog-id=156 op=UNLOAD Dec 16 04:10:52.158000 audit[3487]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3477 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:52.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231346261343034373430353262653538633536643465323266383161 Dec 16 04:10:52.159000 audit: BPF prog-id=157 op=LOAD Dec 16 04:10:52.159000 audit[3487]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3477 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:52.159000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231346261343034373430353262653538633536643465323266383161 Dec 16 04:10:52.159000 audit: BPF prog-id=158 op=LOAD Dec 16 04:10:52.159000 audit[3487]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3477 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:52.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231346261343034373430353262653538633536643465323266383161 Dec 16 04:10:52.159000 audit: BPF prog-id=158 op=UNLOAD Dec 16 04:10:52.159000 audit[3487]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3477 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:52.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231346261343034373430353262653538633536643465323266383161 Dec 16 04:10:52.159000 audit: BPF prog-id=157 op=UNLOAD Dec 16 04:10:52.159000 audit[3487]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3477 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
04:10:52.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231346261343034373430353262653538633536643465323266383161 Dec 16 04:10:52.160000 audit: BPF prog-id=159 op=LOAD Dec 16 04:10:52.160000 audit[3487]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3477 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:52.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231346261343034373430353262653538633536643465323266383161 Dec 16 04:10:52.189308 containerd[1631]: time="2025-12-16T04:10:52.189261802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zqj5q,Uid:1c7f48d4-539e-40a5-bb48-57ed04850cf8,Namespace:calico-system,Attempt:0,} returns sandbox id \"b14ba40474052be58c56d4e22f81abf0e363d81b36a87832bead22f760ace845\"" Dec 16 04:10:52.191815 containerd[1631]: time="2025-12-16T04:10:52.191318414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 04:10:52.224309 kubelet[2982]: E1216 04:10:52.224263 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:52.224309 kubelet[2982]: W1216 04:10:52.224295 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:52.224538 kubelet[2982]: E1216 04:10:52.224341 2982 plugins.go:695] 
"Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:52.325779 kubelet[2982]: E1216 04:10:52.325639 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:52.325779 kubelet[2982]: W1216 04:10:52.325695 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:52.325779 kubelet[2982]: E1216 04:10:52.325725 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:52.427668 kubelet[2982]: E1216 04:10:52.427390 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:52.427668 kubelet[2982]: W1216 04:10:52.427425 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:52.427668 kubelet[2982]: E1216 04:10:52.427452 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:52.428270 kubelet[2982]: E1216 04:10:52.427991 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:52.428397 kubelet[2982]: W1216 04:10:52.428357 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:52.429405 kubelet[2982]: E1216 04:10:52.428558 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:52.429930 kubelet[2982]: E1216 04:10:52.429767 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:52.429930 kubelet[2982]: W1216 04:10:52.429787 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:52.429930 kubelet[2982]: E1216 04:10:52.429805 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:52.430191 kubelet[2982]: E1216 04:10:52.430172 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:52.430291 kubelet[2982]: W1216 04:10:52.430271 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:52.430702 kubelet[2982]: E1216 04:10:52.430362 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:52.431421 kubelet[2982]: E1216 04:10:52.431091 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:52.431421 kubelet[2982]: W1216 04:10:52.431112 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:52.431421 kubelet[2982]: E1216 04:10:52.431129 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 04:10:52.444963 kubelet[2982]: E1216 04:10:52.444600 2982 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 04:10:52.444963 kubelet[2982]: W1216 04:10:52.444771 2982 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 04:10:52.444963 kubelet[2982]: E1216 04:10:52.444799 2982 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 04:10:52.483794 containerd[1631]: time="2025-12-16T04:10:52.483264510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-795c8588fc-4f2xg,Uid:8e3fd0ea-a68f-4e5b-bfa5-1591b2cc7dcf,Namespace:calico-system,Attempt:0,}" Dec 16 04:10:52.492798 kubelet[2982]: E1216 04:10:52.492056 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:10:52.539990 containerd[1631]: time="2025-12-16T04:10:52.539255055Z" level=info msg="connecting to shim f836fdd2f0512a0c0ac9b787f617c9dd389c7dcef454d640c301539c9b05858b" address="unix:///run/containerd/s/0a8f9c166d53fbf3f30d449317821a362f69fa6f8afc88499d3dcf9c39037a1c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 04:10:52.598689 systemd[1]: Started cri-containerd-f836fdd2f0512a0c0ac9b787f617c9dd389c7dcef454d640c301539c9b05858b.scope - libcontainer container f836fdd2f0512a0c0ac9b787f617c9dd389c7dcef454d640c301539c9b05858b. 
Dec 16 04:10:52.655000 audit: BPF prog-id=160 op=LOAD Dec 16 04:10:52.656000 audit: BPF prog-id=161 op=LOAD Dec 16 04:10:52.656000 audit[3544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3533 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:52.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638333666646432663035313261306330616339623738376636313763 Dec 16 04:10:52.657000 audit: BPF prog-id=161 op=UNLOAD Dec 16 04:10:52.657000 audit[3544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3533 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:52.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638333666646432663035313261306330616339623738376636313763 Dec 16 04:10:52.657000 audit: BPF prog-id=162 op=LOAD Dec 16 04:10:52.657000 audit[3544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3533 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:52.657000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638333666646432663035313261306330616339623738376636313763 Dec 16 04:10:52.657000 audit: BPF prog-id=163 op=LOAD Dec 16 04:10:52.657000 audit[3544]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3533 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:52.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638333666646432663035313261306330616339623738376636313763 Dec 16 04:10:52.657000 audit: BPF prog-id=163 op=UNLOAD Dec 16 04:10:52.657000 audit[3544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3533 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:52.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638333666646432663035313261306330616339623738376636313763 Dec 16 04:10:52.657000 audit: BPF prog-id=162 op=UNLOAD Dec 16 04:10:52.657000 audit[3544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3533 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
04:10:52.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638333666646432663035313261306330616339623738376636313763 Dec 16 04:10:52.658000 audit: BPF prog-id=164 op=LOAD Dec 16 04:10:52.658000 audit[3544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3533 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:52.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638333666646432663035313261306330616339623738376636313763 Dec 16 04:10:52.790252 containerd[1631]: time="2025-12-16T04:10:52.790094860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-795c8588fc-4f2xg,Uid:8e3fd0ea-a68f-4e5b-bfa5-1591b2cc7dcf,Namespace:calico-system,Attempt:0,} returns sandbox id \"f836fdd2f0512a0c0ac9b787f617c9dd389c7dcef454d640c301539c9b05858b\"" Dec 16 04:10:54.492650 kubelet[2982]: E1216 04:10:54.492572 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:10:55.510115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount533996235.mount: Deactivated successfully. 
Dec 16 04:10:56.297493 containerd[1631]: time="2025-12-16T04:10:56.296730445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:10:56.329547 containerd[1631]: time="2025-12-16T04:10:56.329479373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5938056" Dec 16 04:10:56.344237 containerd[1631]: time="2025-12-16T04:10:56.344170475Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:10:56.371613 containerd[1631]: time="2025-12-16T04:10:56.362299567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:10:56.371613 containerd[1631]: time="2025-12-16T04:10:56.366440755Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 4.174500195s" Dec 16 04:10:56.371613 containerd[1631]: time="2025-12-16T04:10:56.366480847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 04:10:56.371613 containerd[1631]: time="2025-12-16T04:10:56.368736850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 04:10:56.371913 containerd[1631]: time="2025-12-16T04:10:56.371800958Z" level=info msg="CreateContainer within 
sandbox \"b14ba40474052be58c56d4e22f81abf0e363d81b36a87832bead22f760ace845\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 04:10:56.391455 containerd[1631]: time="2025-12-16T04:10:56.391399238Z" level=info msg="Container 09366694b96fc0e29accdc28a76bd0cb376ea9951b55fc3e2af79ffb36ad164e: CDI devices from CRI Config.CDIDevices: []" Dec 16 04:10:56.404600 containerd[1631]: time="2025-12-16T04:10:56.404521758Z" level=info msg="CreateContainer within sandbox \"b14ba40474052be58c56d4e22f81abf0e363d81b36a87832bead22f760ace845\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"09366694b96fc0e29accdc28a76bd0cb376ea9951b55fc3e2af79ffb36ad164e\"" Dec 16 04:10:56.405403 containerd[1631]: time="2025-12-16T04:10:56.405337918Z" level=info msg="StartContainer for \"09366694b96fc0e29accdc28a76bd0cb376ea9951b55fc3e2af79ffb36ad164e\"" Dec 16 04:10:56.409359 containerd[1631]: time="2025-12-16T04:10:56.409266553Z" level=info msg="connecting to shim 09366694b96fc0e29accdc28a76bd0cb376ea9951b55fc3e2af79ffb36ad164e" address="unix:///run/containerd/s/e12ee8d71d1d5eddf3d6cff44d22087b8118a8a1ade495869de2a831108fec4b" protocol=ttrpc version=3 Dec 16 04:10:56.457695 systemd[1]: Started cri-containerd-09366694b96fc0e29accdc28a76bd0cb376ea9951b55fc3e2af79ffb36ad164e.scope - libcontainer container 09366694b96fc0e29accdc28a76bd0cb376ea9951b55fc3e2af79ffb36ad164e. 
Dec 16 04:10:56.493398 kubelet[2982]: E1216 04:10:56.493290 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:10:56.601244 kernel: kauditd_printk_skb: 58 callbacks suppressed Dec 16 04:10:56.601731 kernel: audit: type=1334 audit(1765858256.594:560): prog-id=165 op=LOAD Dec 16 04:10:56.594000 audit: BPF prog-id=165 op=LOAD Dec 16 04:10:56.608663 kernel: audit: type=1300 audit(1765858256.594:560): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=3477 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:56.594000 audit[3579]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=3477 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:56.616489 kernel: audit: type=1327 audit(1765858256.594:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333636363934623936666330653239616363646332386137366264 Dec 16 04:10:56.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333636363934623936666330653239616363646332386137366264 Dec 16 04:10:56.594000 audit: BPF prog-id=166 op=LOAD Dec 16 
04:10:56.594000 audit[3579]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=3477 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:56.620972 kernel: audit: type=1334 audit(1765858256.594:561): prog-id=166 op=LOAD Dec 16 04:10:56.621065 kernel: audit: type=1300 audit(1765858256.594:561): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=3477 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:56.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333636363934623936666330653239616363646332386137366264 Dec 16 04:10:56.595000 audit: BPF prog-id=166 op=UNLOAD Dec 16 04:10:56.630964 kernel: audit: type=1327 audit(1765858256.594:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333636363934623936666330653239616363646332386137366264 Dec 16 04:10:56.631126 kernel: audit: type=1334 audit(1765858256.595:562): prog-id=166 op=UNLOAD Dec 16 04:10:56.631169 kernel: audit: type=1300 audit(1765858256.595:562): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3477 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:56.595000 audit[3579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=16 a1=0 a2=0 a3=0 items=0 ppid=3477 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:56.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333636363934623936666330653239616363646332386137366264 Dec 16 04:10:56.637868 kernel: audit: type=1327 audit(1765858256.595:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333636363934623936666330653239616363646332386137366264 Dec 16 04:10:56.595000 audit: BPF prog-id=165 op=UNLOAD Dec 16 04:10:56.648452 kernel: audit: type=1334 audit(1765858256.595:563): prog-id=165 op=UNLOAD Dec 16 04:10:56.595000 audit[3579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3477 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:56.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333636363934623936666330653239616363646332386137366264 Dec 16 04:10:56.595000 audit: BPF prog-id=167 op=LOAD Dec 16 04:10:56.595000 audit[3579]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=3477 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:10:56.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039333636363934623936666330653239616363646332386137366264 Dec 16 04:10:56.688398 containerd[1631]: time="2025-12-16T04:10:56.688322034Z" level=info msg="StartContainer for \"09366694b96fc0e29accdc28a76bd0cb376ea9951b55fc3e2af79ffb36ad164e\" returns successfully" Dec 16 04:10:56.718855 systemd[1]: cri-containerd-09366694b96fc0e29accdc28a76bd0cb376ea9951b55fc3e2af79ffb36ad164e.scope: Deactivated successfully. Dec 16 04:10:56.721000 audit: BPF prog-id=167 op=UNLOAD Dec 16 04:10:56.719752 systemd[1]: cri-containerd-09366694b96fc0e29accdc28a76bd0cb376ea9951b55fc3e2af79ffb36ad164e.scope: Consumed 84ms CPU time, 6.8M memory peak, 3.2M read from disk. Dec 16 04:10:56.794390 containerd[1631]: time="2025-12-16T04:10:56.792947657Z" level=info msg="received container exit event container_id:\"09366694b96fc0e29accdc28a76bd0cb376ea9951b55fc3e2af79ffb36ad164e\" id:\"09366694b96fc0e29accdc28a76bd0cb376ea9951b55fc3e2af79ffb36ad164e\" pid:3592 exited_at:{seconds:1765858256 nanos:727115802}" Dec 16 04:10:56.836644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09366694b96fc0e29accdc28a76bd0cb376ea9951b55fc3e2af79ffb36ad164e-rootfs.mount: Deactivated successfully. 
Dec 16 04:10:58.492090 kubelet[2982]: E1216 04:10:58.491900 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:11:00.492236 kubelet[2982]: E1216 04:11:00.492169 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:11:01.186525 containerd[1631]: time="2025-12-16T04:11:01.186430139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:11:01.188777 containerd[1631]: time="2025-12-16T04:11:01.188556102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 04:11:01.190503 containerd[1631]: time="2025-12-16T04:11:01.190290682Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:11:01.195071 containerd[1631]: time="2025-12-16T04:11:01.194920994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:11:01.197873 containerd[1631]: time="2025-12-16T04:11:01.197822110Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.829050155s" Dec 16 04:11:01.197957 containerd[1631]: time="2025-12-16T04:11:01.197878947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 04:11:01.200903 containerd[1631]: time="2025-12-16T04:11:01.200834593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 04:11:01.239507 containerd[1631]: time="2025-12-16T04:11:01.239450026Z" level=info msg="CreateContainer within sandbox \"f836fdd2f0512a0c0ac9b787f617c9dd389c7dcef454d640c301539c9b05858b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 04:11:01.252361 containerd[1631]: time="2025-12-16T04:11:01.250732718Z" level=info msg="Container f03d594234d5d8099a48451e0cb19b3bd7db0543018bebe69f98d3caae3bbbea: CDI devices from CRI Config.CDIDevices: []" Dec 16 04:11:01.260529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount247571076.mount: Deactivated successfully. 
Dec 16 04:11:01.268416 containerd[1631]: time="2025-12-16T04:11:01.266846385Z" level=info msg="CreateContainer within sandbox \"f836fdd2f0512a0c0ac9b787f617c9dd389c7dcef454d640c301539c9b05858b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f03d594234d5d8099a48451e0cb19b3bd7db0543018bebe69f98d3caae3bbbea\"" Dec 16 04:11:01.269411 containerd[1631]: time="2025-12-16T04:11:01.269364272Z" level=info msg="StartContainer for \"f03d594234d5d8099a48451e0cb19b3bd7db0543018bebe69f98d3caae3bbbea\"" Dec 16 04:11:01.271841 containerd[1631]: time="2025-12-16T04:11:01.271809043Z" level=info msg="connecting to shim f03d594234d5d8099a48451e0cb19b3bd7db0543018bebe69f98d3caae3bbbea" address="unix:///run/containerd/s/0a8f9c166d53fbf3f30d449317821a362f69fa6f8afc88499d3dcf9c39037a1c" protocol=ttrpc version=3 Dec 16 04:11:01.339705 systemd[1]: Started cri-containerd-f03d594234d5d8099a48451e0cb19b3bd7db0543018bebe69f98d3caae3bbbea.scope - libcontainer container f03d594234d5d8099a48451e0cb19b3bd7db0543018bebe69f98d3caae3bbbea. 
Dec 16 04:11:01.371000 audit: BPF prog-id=168 op=LOAD Dec 16 04:11:01.372000 audit: BPF prog-id=169 op=LOAD Dec 16 04:11:01.372000 audit[3636]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3533 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:01.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630336435393432333464356438303939613438343531653063623139 Dec 16 04:11:01.372000 audit: BPF prog-id=169 op=UNLOAD Dec 16 04:11:01.372000 audit[3636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3533 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:01.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630336435393432333464356438303939613438343531653063623139 Dec 16 04:11:01.372000 audit: BPF prog-id=170 op=LOAD Dec 16 04:11:01.372000 audit[3636]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3533 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:01.372000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630336435393432333464356438303939613438343531653063623139 Dec 16 04:11:01.372000 audit: BPF prog-id=171 op=LOAD Dec 16 04:11:01.372000 audit[3636]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3533 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:01.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630336435393432333464356438303939613438343531653063623139 Dec 16 04:11:01.373000 audit: BPF prog-id=171 op=UNLOAD Dec 16 04:11:01.373000 audit[3636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3533 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:01.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630336435393432333464356438303939613438343531653063623139 Dec 16 04:11:01.373000 audit: BPF prog-id=170 op=UNLOAD Dec 16 04:11:01.373000 audit[3636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3533 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
04:11:01.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630336435393432333464356438303939613438343531653063623139 Dec 16 04:11:01.373000 audit: BPF prog-id=172 op=LOAD Dec 16 04:11:01.373000 audit[3636]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3533 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:01.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630336435393432333464356438303939613438343531653063623139 Dec 16 04:11:01.429084 containerd[1631]: time="2025-12-16T04:11:01.428966322Z" level=info msg="StartContainer for \"f03d594234d5d8099a48451e0cb19b3bd7db0543018bebe69f98d3caae3bbbea\" returns successfully" Dec 16 04:11:02.492214 kubelet[2982]: E1216 04:11:02.491988 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:11:02.741401 kubelet[2982]: I1216 04:11:02.741090 2982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 04:11:04.492951 kubelet[2982]: E1216 04:11:04.492426 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:11:06.493854 kubelet[2982]: E1216 04:11:06.493744 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:11:07.442082 containerd[1631]: time="2025-12-16T04:11:07.441994698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:11:07.445412 containerd[1631]: time="2025-12-16T04:11:07.445336751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 04:11:07.451915 containerd[1631]: time="2025-12-16T04:11:07.451810555Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:11:07.475576 containerd[1631]: time="2025-12-16T04:11:07.475443080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:11:07.476824 containerd[1631]: time="2025-12-16T04:11:07.476645582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.275726827s" Dec 16 04:11:07.476824 containerd[1631]: time="2025-12-16T04:11:07.476687750Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 04:11:07.481904 containerd[1631]: time="2025-12-16T04:11:07.481862607Z" level=info msg="CreateContainer within sandbox \"b14ba40474052be58c56d4e22f81abf0e363d81b36a87832bead22f760ace845\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 04:11:07.501778 containerd[1631]: time="2025-12-16T04:11:07.500525893Z" level=info msg="Container 717d2fc78df28f0d10a5a0f62d668af82f4674105629eaa48e5bfcb5984ee0f9: CDI devices from CRI Config.CDIDevices: []" Dec 16 04:11:07.505926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount25184518.mount: Deactivated successfully. Dec 16 04:11:07.521884 containerd[1631]: time="2025-12-16T04:11:07.521828882Z" level=info msg="CreateContainer within sandbox \"b14ba40474052be58c56d4e22f81abf0e363d81b36a87832bead22f760ace845\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"717d2fc78df28f0d10a5a0f62d668af82f4674105629eaa48e5bfcb5984ee0f9\"" Dec 16 04:11:07.523568 containerd[1631]: time="2025-12-16T04:11:07.523423505Z" level=info msg="StartContainer for \"717d2fc78df28f0d10a5a0f62d668af82f4674105629eaa48e5bfcb5984ee0f9\"" Dec 16 04:11:07.527084 containerd[1631]: time="2025-12-16T04:11:07.527043245Z" level=info msg="connecting to shim 717d2fc78df28f0d10a5a0f62d668af82f4674105629eaa48e5bfcb5984ee0f9" address="unix:///run/containerd/s/e12ee8d71d1d5eddf3d6cff44d22087b8118a8a1ade495869de2a831108fec4b" protocol=ttrpc version=3 Dec 16 04:11:07.604858 systemd[1]: Started cri-containerd-717d2fc78df28f0d10a5a0f62d668af82f4674105629eaa48e5bfcb5984ee0f9.scope - libcontainer container 717d2fc78df28f0d10a5a0f62d668af82f4674105629eaa48e5bfcb5984ee0f9. 
Dec 16 04:11:07.713892 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 16 04:11:07.714998 kernel: audit: type=1334 audit(1765858267.702:574): prog-id=173 op=LOAD Dec 16 04:11:07.715120 kernel: audit: type=1300 audit(1765858267.702:574): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3477 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:07.702000 audit: BPF prog-id=173 op=LOAD Dec 16 04:11:07.702000 audit[3682]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3477 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:07.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731376432666337386466323866306431306135613066363264363638 Dec 16 04:11:07.721017 kernel: audit: type=1327 audit(1765858267.702:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731376432666337386466323866306431306135613066363264363638 Dec 16 04:11:07.724728 kernel: audit: type=1334 audit(1765858267.702:575): prog-id=174 op=LOAD Dec 16 04:11:07.702000 audit: BPF prog-id=174 op=LOAD Dec 16 04:11:07.702000 audit[3682]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3477 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:11:07.727371 kernel: audit: type=1300 audit(1765858267.702:575): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3477 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:11:07.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731376432666337386466323866306431306135613066363264363638
Dec 16 04:11:07.732654 kernel: audit: type=1327 audit(1765858267.702:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731376432666337386466323866306431306135613066363264363638
Dec 16 04:11:07.711000 audit: BPF prog-id=174 op=UNLOAD
Dec 16 04:11:07.736596 kernel: audit: type=1334 audit(1765858267.711:576): prog-id=174 op=UNLOAD
Dec 16 04:11:07.711000 audit[3682]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3477 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:11:07.739295 kernel: audit: type=1300 audit(1765858267.711:576): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3477 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:11:07.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731376432666337386466323866306431306135613066363264363638
Dec 16 04:11:07.748424 kernel: audit: type=1327 audit(1765858267.711:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731376432666337386466323866306431306135613066363264363638
Dec 16 04:11:07.711000 audit: BPF prog-id=173 op=UNLOAD
Dec 16 04:11:07.754595 kernel: audit: type=1334 audit(1765858267.711:577): prog-id=173 op=UNLOAD
Dec 16 04:11:07.711000 audit[3682]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3477 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:11:07.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731376432666337386466323866306431306135613066363264363638
Dec 16 04:11:07.711000 audit: BPF prog-id=175 op=LOAD
Dec 16 04:11:07.711000 audit[3682]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3477 pid=3682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:11:07.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731376432666337386466323866306431306135613066363264363638
Dec 16 04:11:07.802409 containerd[1631]: time="2025-12-16T04:11:07.802306382Z" level=info msg="StartContainer for \"717d2fc78df28f0d10a5a0f62d668af82f4674105629eaa48e5bfcb5984ee0f9\" returns successfully"
Dec 16 04:11:08.502574 kubelet[2982]: E1216 04:11:08.501480 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1"
Dec 16 04:11:08.908271 kubelet[2982]: I1216 04:11:08.907545 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-795c8588fc-4f2xg" podStartSLOduration=10.499483306 podStartE2EDuration="18.907488778s" podCreationTimestamp="2025-12-16 04:10:50 +0000 UTC" firstStartedPulling="2025-12-16 04:10:52.792360865 +0000 UTC m=+27.504103472" lastFinishedPulling="2025-12-16 04:11:01.200366327 +0000 UTC m=+35.912108944" observedRunningTime="2025-12-16 04:11:01.762980859 +0000 UTC m=+36.474723486" watchObservedRunningTime="2025-12-16 04:11:08.907488778 +0000 UTC m=+43.619231396"
Dec 16 04:11:09.270000 audit: BPF prog-id=175 op=UNLOAD
Dec 16 04:11:09.267079 systemd[1]: cri-containerd-717d2fc78df28f0d10a5a0f62d668af82f4674105629eaa48e5bfcb5984ee0f9.scope: Deactivated successfully.
Dec 16 04:11:09.267853 systemd[1]: cri-containerd-717d2fc78df28f0d10a5a0f62d668af82f4674105629eaa48e5bfcb5984ee0f9.scope: Consumed 970ms CPU time, 165.4M memory peak, 6.1M read from disk, 171.3M written to disk.
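The audit PROCTITLE records above carry the audited process's command line as hex, with NUL bytes separating the argv entries (the value is truncated by the kernel, which is why the container ID at the end is cut short). A minimal decoding sketch; the helper name `decode_proctitle` is mine, not part of any audit tooling:

```python
# Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
def decode_proctitle(hex_str: str) -> list[str]:
    raw = bytes.fromhex(hex_str)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00")]

# First arguments of the proctitle seen in the records above:
sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
print(decode_proctitle(sample))  # prints ['runc', '--root', '/run/containerd/runc/k8s.io']
```

Applied to the full value in the log, this yields the `runc --root /run/containerd/runc/k8s.io --log …` invocation that containerd used to start the container.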
Dec 16 04:11:09.310536 containerd[1631]: time="2025-12-16T04:11:09.310446277Z" level=info msg="received container exit event container_id:\"717d2fc78df28f0d10a5a0f62d668af82f4674105629eaa48e5bfcb5984ee0f9\" id:\"717d2fc78df28f0d10a5a0f62d668af82f4674105629eaa48e5bfcb5984ee0f9\" pid:3695 exited_at:{seconds:1765858269 nanos:292660451}"
Dec 16 04:11:09.403754 kubelet[2982]: I1216 04:11:09.403696 2982 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 16 04:11:09.417143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-717d2fc78df28f0d10a5a0f62d668af82f4674105629eaa48e5bfcb5984ee0f9-rootfs.mount: Deactivated successfully.
Dec 16 04:11:09.546717 systemd[1]: Created slice kubepods-besteffort-pod0a58bc2a_4639_4181_980c_f9b5b1855f06.slice - libcontainer container kubepods-besteffort-pod0a58bc2a_4639_4181_980c_f9b5b1855f06.slice.
Dec 16 04:11:09.585221 systemd[1]: Created slice kubepods-besteffort-pod2b2fbc29_627a_4636_910d_2ada1caf4c64.slice - libcontainer container kubepods-besteffort-pod2b2fbc29_627a_4636_910d_2ada1caf4c64.slice.
Dec 16 04:11:09.607713 systemd[1]: Created slice kubepods-burstable-pod9f394ecb_5814_4876_9d24_cba0fe4360b7.slice - libcontainer container kubepods-burstable-pod9f394ecb_5814_4876_9d24_cba0fe4360b7.slice.
Dec 16 04:11:09.628767 systemd[1]: Created slice kubepods-burstable-podd53735f5_b411_4ae8_af6c_2e5709e7684a.slice - libcontainer container kubepods-burstable-podd53735f5_b411_4ae8_af6c_2e5709e7684a.slice.
Dec 16 04:11:09.642665 systemd[1]: Created slice kubepods-besteffort-pod698ea2f4_6c38_4f29_af10_d89d447f19d4.slice - libcontainer container kubepods-besteffort-pod698ea2f4_6c38_4f29_af10_d89d447f19d4.slice.
Dec 16 04:11:09.655024 kubelet[2982]: I1216 04:11:09.654961 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bad388bf-fcef-4c56-88ec-bd97ca364c03-config\") pod \"goldmane-666569f655-s8rh7\" (UID: \"bad388bf-fcef-4c56-88ec-bd97ca364c03\") " pod="calico-system/goldmane-666569f655-s8rh7"
Dec 16 04:11:09.657900 kubelet[2982]: I1216 04:11:09.655661 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68htt\" (UniqueName: \"kubernetes.io/projected/d53735f5-b411-4ae8-af6c-2e5709e7684a-kube-api-access-68htt\") pod \"coredns-668d6bf9bc-knj9k\" (UID: \"d53735f5-b411-4ae8-af6c-2e5709e7684a\") " pod="kube-system/coredns-668d6bf9bc-knj9k"
Dec 16 04:11:09.657900 kubelet[2982]: I1216 04:11:09.655716 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1-calico-apiserver-certs\") pod \"calico-apiserver-5887594b4-svthm\" (UID: \"ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1\") " pod="calico-apiserver/calico-apiserver-5887594b4-svthm"
Dec 16 04:11:09.657900 kubelet[2982]: I1216 04:11:09.655747 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bad388bf-fcef-4c56-88ec-bd97ca364c03-goldmane-ca-bundle\") pod \"goldmane-666569f655-s8rh7\" (UID: \"bad388bf-fcef-4c56-88ec-bd97ca364c03\") " pod="calico-system/goldmane-666569f655-s8rh7"
Dec 16 04:11:09.657900 kubelet[2982]: I1216 04:11:09.655778 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d53735f5-b411-4ae8-af6c-2e5709e7684a-config-volume\") pod \"coredns-668d6bf9bc-knj9k\" (UID: \"d53735f5-b411-4ae8-af6c-2e5709e7684a\") " pod="kube-system/coredns-668d6bf9bc-knj9k"
Dec 16 04:11:09.657900 kubelet[2982]: I1216 04:11:09.655814 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/698ea2f4-6c38-4f29-af10-d89d447f19d4-tigera-ca-bundle\") pod \"calico-kube-controllers-6665678475-rs6tq\" (UID: \"698ea2f4-6c38-4f29-af10-d89d447f19d4\") " pod="calico-system/calico-kube-controllers-6665678475-rs6tq"
Dec 16 04:11:09.658302 kubelet[2982]: I1216 04:11:09.655858 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch2vc\" (UniqueName: \"kubernetes.io/projected/698ea2f4-6c38-4f29-af10-d89d447f19d4-kube-api-access-ch2vc\") pod \"calico-kube-controllers-6665678475-rs6tq\" (UID: \"698ea2f4-6c38-4f29-af10-d89d447f19d4\") " pod="calico-system/calico-kube-controllers-6665678475-rs6tq"
Dec 16 04:11:09.658302 kubelet[2982]: I1216 04:11:09.655889 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bad388bf-fcef-4c56-88ec-bd97ca364c03-goldmane-key-pair\") pod \"goldmane-666569f655-s8rh7\" (UID: \"bad388bf-fcef-4c56-88ec-bd97ca364c03\") " pod="calico-system/goldmane-666569f655-s8rh7"
Dec 16 04:11:09.658302 kubelet[2982]: I1216 04:11:09.655927 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2b2fbc29-627a-4636-910d-2ada1caf4c64-calico-apiserver-certs\") pod \"calico-apiserver-5887594b4-5fm6l\" (UID: \"2b2fbc29-627a-4636-910d-2ada1caf4c64\") " pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l"
Dec 16 04:11:09.658302 kubelet[2982]: I1216 04:11:09.655961 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a58bc2a-4639-4181-980c-f9b5b1855f06-whisker-ca-bundle\") pod \"whisker-df4d97596-pnqrb\" (UID: \"0a58bc2a-4639-4181-980c-f9b5b1855f06\") " pod="calico-system/whisker-df4d97596-pnqrb"
Dec 16 04:11:09.658302 kubelet[2982]: I1216 04:11:09.656008 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqvvx\" (UniqueName: \"kubernetes.io/projected/ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1-kube-api-access-cqvvx\") pod \"calico-apiserver-5887594b4-svthm\" (UID: \"ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1\") " pod="calico-apiserver/calico-apiserver-5887594b4-svthm"
Dec 16 04:11:09.659008 kubelet[2982]: I1216 04:11:09.656042 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmrlg\" (UniqueName: \"kubernetes.io/projected/bad388bf-fcef-4c56-88ec-bd97ca364c03-kube-api-access-xmrlg\") pod \"goldmane-666569f655-s8rh7\" (UID: \"bad388bf-fcef-4c56-88ec-bd97ca364c03\") " pod="calico-system/goldmane-666569f655-s8rh7"
Dec 16 04:11:09.659008 kubelet[2982]: I1216 04:11:09.656070 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f394ecb-5814-4876-9d24-cba0fe4360b7-config-volume\") pod \"coredns-668d6bf9bc-fjqnd\" (UID: \"9f394ecb-5814-4876-9d24-cba0fe4360b7\") " pod="kube-system/coredns-668d6bf9bc-fjqnd"
Dec 16 04:11:09.659008 kubelet[2982]: I1216 04:11:09.656100 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmlpb\" (UniqueName: \"kubernetes.io/projected/0a58bc2a-4639-4181-980c-f9b5b1855f06-kube-api-access-vmlpb\") pod \"whisker-df4d97596-pnqrb\" (UID: \"0a58bc2a-4639-4181-980c-f9b5b1855f06\") " pod="calico-system/whisker-df4d97596-pnqrb"
Dec 16 04:11:09.659008 kubelet[2982]: I1216 04:11:09.657351 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0a58bc2a-4639-4181-980c-f9b5b1855f06-whisker-backend-key-pair\") pod \"whisker-df4d97596-pnqrb\" (UID: \"0a58bc2a-4639-4181-980c-f9b5b1855f06\") " pod="calico-system/whisker-df4d97596-pnqrb"
Dec 16 04:11:09.659008 kubelet[2982]: I1216 04:11:09.657445 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdgpk\" (UniqueName: \"kubernetes.io/projected/2b2fbc29-627a-4636-910d-2ada1caf4c64-kube-api-access-fdgpk\") pod \"calico-apiserver-5887594b4-5fm6l\" (UID: \"2b2fbc29-627a-4636-910d-2ada1caf4c64\") " pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l"
Dec 16 04:11:09.659230 kubelet[2982]: I1216 04:11:09.657508 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slsgb\" (UniqueName: \"kubernetes.io/projected/9f394ecb-5814-4876-9d24-cba0fe4360b7-kube-api-access-slsgb\") pod \"coredns-668d6bf9bc-fjqnd\" (UID: \"9f394ecb-5814-4876-9d24-cba0fe4360b7\") " pod="kube-system/coredns-668d6bf9bc-fjqnd"
Dec 16 04:11:09.663362 systemd[1]: Created slice kubepods-besteffort-podbad388bf_fcef_4c56_88ec_bd97ca364c03.slice - libcontainer container kubepods-besteffort-podbad388bf_fcef_4c56_88ec_bd97ca364c03.slice.
Dec 16 04:11:09.682260 systemd[1]: Created slice kubepods-besteffort-podff1b9f91_a1b4_4d4a_995d_2e8c1ad4d2e1.slice - libcontainer container kubepods-besteffort-podff1b9f91_a1b4_4d4a_995d_2e8c1ad4d2e1.slice.
Dec 16 04:11:09.879466 containerd[1631]: time="2025-12-16T04:11:09.879054432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-df4d97596-pnqrb,Uid:0a58bc2a-4639-4181-980c-f9b5b1855f06,Namespace:calico-system,Attempt:0,}"
Dec 16 04:11:09.891412 containerd[1631]: time="2025-12-16T04:11:09.887123987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Dec 16 04:11:09.901224 containerd[1631]: time="2025-12-16T04:11:09.900984698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5887594b4-5fm6l,Uid:2b2fbc29-627a-4636-910d-2ada1caf4c64,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 04:11:09.943022 containerd[1631]: time="2025-12-16T04:11:09.942939614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-knj9k,Uid:d53735f5-b411-4ae8-af6c-2e5709e7684a,Namespace:kube-system,Attempt:0,}"
Dec 16 04:11:09.958420 containerd[1631]: time="2025-12-16T04:11:09.957659212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6665678475-rs6tq,Uid:698ea2f4-6c38-4f29-af10-d89d447f19d4,Namespace:calico-system,Attempt:0,}"
Dec 16 04:11:10.012128 containerd[1631]: time="2025-12-16T04:11:10.012023967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5887594b4-svthm,Uid:ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 04:11:10.033186 containerd[1631]: time="2025-12-16T04:11:10.033026439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s8rh7,Uid:bad388bf-fcef-4c56-88ec-bd97ca364c03,Namespace:calico-system,Attempt:0,}"
Dec 16 04:11:10.222949 containerd[1631]: time="2025-12-16T04:11:10.222583997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fjqnd,Uid:9f394ecb-5814-4876-9d24-cba0fe4360b7,Namespace:kube-system,Attempt:0,}"
Dec 16 04:11:10.507264 systemd[1]: Created slice kubepods-besteffort-podd6d2249c_912c_448c_8aa3_089c6b8243d1.slice - libcontainer container kubepods-besteffort-podd6d2249c_912c_448c_8aa3_089c6b8243d1.slice.
Dec 16 04:11:10.570217 containerd[1631]: time="2025-12-16T04:11:10.569967154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cd92j,Uid:d6d2249c-912c-448c-8aa3-089c6b8243d1,Namespace:calico-system,Attempt:0,}"
Dec 16 04:11:10.774230 containerd[1631]: time="2025-12-16T04:11:10.773621418Z" level=error msg="Failed to destroy network for sandbox \"18acc96f14781f4cc7c5a376fe75cf1155954fc790ddfd382f273d6856def8a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:10.778154 systemd[1]: run-netns-cni\x2dcb9c5fb6\x2d1224\x2d66e2\x2d079d\x2d1d736b456594.mount: Deactivated successfully.
Dec 16 04:11:10.782580 containerd[1631]: time="2025-12-16T04:11:10.782514825Z" level=error msg="Failed to destroy network for sandbox \"97884d14499e3fb501e7e285216bc2a03f737fe769efc6fe2f81e6669f8bd080\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:10.787643 systemd[1]: run-netns-cni\x2dd50ab663\x2daeb5\x2d9fbd\x2d5693\x2d9165604af21f.mount: Deactivated successfully.
Dec 16 04:11:10.814045 containerd[1631]: time="2025-12-16T04:11:10.798801161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-df4d97596-pnqrb,Uid:0a58bc2a-4639-4181-980c-f9b5b1855f06,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97884d14499e3fb501e7e285216bc2a03f737fe769efc6fe2f81e6669f8bd080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:10.819461 containerd[1631]: time="2025-12-16T04:11:10.801335341Z" level=error msg="Failed to destroy network for sandbox \"6265b49d59ffd7d3de6e2f5f75e77909aef593dbe5211d143063e17765b0c522\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:10.820562 containerd[1631]: time="2025-12-16T04:11:10.808349420Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-knj9k,Uid:d53735f5-b411-4ae8-af6c-2e5709e7684a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"18acc96f14781f4cc7c5a376fe75cf1155954fc790ddfd382f273d6856def8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:10.827411 containerd[1631]: time="2025-12-16T04:11:10.826602882Z" level=error msg="Failed to destroy network for sandbox \"32ac99c34c20ff3ea9a16aa5600d12c3abc51403eb0708a7e7920bc22f332db3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:10.831618 kubelet[2982]: E1216 04:11:10.831536 2982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18acc96f14781f4cc7c5a376fe75cf1155954fc790ddfd382f273d6856def8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:10.832503 kubelet[2982]: E1216 04:11:10.832432 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18acc96f14781f4cc7c5a376fe75cf1155954fc790ddfd382f273d6856def8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-knj9k"
Dec 16 04:11:10.832948 kubelet[2982]: E1216 04:11:10.831810 2982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97884d14499e3fb501e7e285216bc2a03f737fe769efc6fe2f81e6669f8bd080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:10.832948 kubelet[2982]: E1216 04:11:10.832818 2982 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18acc96f14781f4cc7c5a376fe75cf1155954fc790ddfd382f273d6856def8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-knj9k"
Dec 16 04:11:10.833092 kubelet[2982]: E1216 04:11:10.832843 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97884d14499e3fb501e7e285216bc2a03f737fe769efc6fe2f81e6669f8bd080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-df4d97596-pnqrb"
Dec 16 04:11:10.833092 kubelet[2982]: E1216 04:11:10.832998 2982 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97884d14499e3fb501e7e285216bc2a03f737fe769efc6fe2f81e6669f8bd080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-df4d97596-pnqrb"
Dec 16 04:11:10.833092 kubelet[2982]: E1216 04:11:10.833064 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-df4d97596-pnqrb_calico-system(0a58bc2a-4639-4181-980c-f9b5b1855f06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-df4d97596-pnqrb_calico-system(0a58bc2a-4639-4181-980c-f9b5b1855f06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97884d14499e3fb501e7e285216bc2a03f737fe769efc6fe2f81e6669f8bd080\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-df4d97596-pnqrb" podUID="0a58bc2a-4639-4181-980c-f9b5b1855f06"
Dec 16 04:11:10.834257 kubelet[2982]: E1216 04:11:10.833419 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-knj9k_kube-system(d53735f5-b411-4ae8-af6c-2e5709e7684a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-knj9k_kube-system(d53735f5-b411-4ae8-af6c-2e5709e7684a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18acc96f14781f4cc7c5a376fe75cf1155954fc790ddfd382f273d6856def8a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-knj9k" podUID="d53735f5-b411-4ae8-af6c-2e5709e7684a"
Dec 16 04:11:10.834595 containerd[1631]: time="2025-12-16T04:11:10.834523747Z" level=error msg="Failed to destroy network for sandbox \"6faacddeb840f4e7559f0d1cbaf01fd2b4c11fbd9fc2ee61e1845098550230f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:10.841756 containerd[1631]: time="2025-12-16T04:11:10.841563331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6665678475-rs6tq,Uid:698ea2f4-6c38-4f29-af10-d89d447f19d4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6265b49d59ffd7d3de6e2f5f75e77909aef593dbe5211d143063e17765b0c522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:10.843198 kubelet[2982]: E1216 04:11:10.842897 2982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6265b49d59ffd7d3de6e2f5f75e77909aef593dbe5211d143063e17765b0c522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:10.843198 kubelet[2982]: E1216 04:11:10.843005 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6265b49d59ffd7d3de6e2f5f75e77909aef593dbe5211d143063e17765b0c522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6665678475-rs6tq"
Dec 16 04:11:10.843198 kubelet[2982]: E1216 04:11:10.843040 2982 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6265b49d59ffd7d3de6e2f5f75e77909aef593dbe5211d143063e17765b0c522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6665678475-rs6tq"
Dec 16 04:11:10.844400 kubelet[2982]: E1216 04:11:10.843112 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6665678475-rs6tq_calico-system(698ea2f4-6c38-4f29-af10-d89d447f19d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6665678475-rs6tq_calico-system(698ea2f4-6c38-4f29-af10-d89d447f19d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6265b49d59ffd7d3de6e2f5f75e77909aef593dbe5211d143063e17765b0c522\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6665678475-rs6tq" podUID="698ea2f4-6c38-4f29-af10-d89d447f19d4"
Dec 16 04:11:10.847593 containerd[1631]: time="2025-12-16T04:11:10.847283773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5887594b4-5fm6l,Uid:2b2fbc29-627a-4636-910d-2ada1caf4c64,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6faacddeb840f4e7559f0d1cbaf01fd2b4c11fbd9fc2ee61e1845098550230f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:10.848236 containerd[1631]: time="2025-12-16T04:11:10.847844267Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s8rh7,Uid:bad388bf-fcef-4c56-88ec-bd97ca364c03,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32ac99c34c20ff3ea9a16aa5600d12c3abc51403eb0708a7e7920bc22f332db3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:10.848347 kubelet[2982]: E1216 04:11:10.848222 2982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6faacddeb840f4e7559f0d1cbaf01fd2b4c11fbd9fc2ee61e1845098550230f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:10.848347 kubelet[2982]: E1216 04:11:10.848293 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6faacddeb840f4e7559f0d1cbaf01fd2b4c11fbd9fc2ee61e1845098550230f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l"
Dec 16 04:11:10.848347 kubelet[2982]: E1216 04:11:10.848331 2982 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6faacddeb840f4e7559f0d1cbaf01fd2b4c11fbd9fc2ee61e1845098550230f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l"
Dec 16 04:11:10.848552 kubelet[2982]: E1216 04:11:10.848417 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5887594b4-5fm6l_calico-apiserver(2b2fbc29-627a-4636-910d-2ada1caf4c64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5887594b4-5fm6l_calico-apiserver(2b2fbc29-627a-4636-910d-2ada1caf4c64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6faacddeb840f4e7559f0d1cbaf01fd2b4c11fbd9fc2ee61e1845098550230f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l" podUID="2b2fbc29-627a-4636-910d-2ada1caf4c64"
Dec 16 04:11:10.849844 kubelet[2982]: E1216 04:11:10.848809 2982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32ac99c34c20ff3ea9a16aa5600d12c3abc51403eb0708a7e7920bc22f332db3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:10.849844 kubelet[2982]: E1216 04:11:10.849410 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32ac99c34c20ff3ea9a16aa5600d12c3abc51403eb0708a7e7920bc22f332db3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s8rh7"
Dec 16 04:11:10.849844 kubelet[2982]: E1216 04:11:10.849444 2982 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32ac99c34c20ff3ea9a16aa5600d12c3abc51403eb0708a7e7920bc22f332db3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s8rh7"
Dec 16 04:11:10.850542 kubelet[2982]: E1216 04:11:10.849536 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-s8rh7_calico-system(bad388bf-fcef-4c56-88ec-bd97ca364c03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-s8rh7_calico-system(bad388bf-fcef-4c56-88ec-bd97ca364c03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32ac99c34c20ff3ea9a16aa5600d12c3abc51403eb0708a7e7920bc22f332db3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-s8rh7" podUID="bad388bf-fcef-4c56-88ec-bd97ca364c03"
Dec 16 04:11:10.995869 containerd[1631]: time="2025-12-16T04:11:10.995636007Z" level=error msg="Failed to destroy network for sandbox \"cd3842ade3f5a215cdc7f884764348224864a27835228bb45f559cc2c4a48152\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:11.005264 containerd[1631]: time="2025-12-16T04:11:11.005127288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cd92j,Uid:d6d2249c-912c-448c-8aa3-089c6b8243d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd3842ade3f5a215cdc7f884764348224864a27835228bb45f559cc2c4a48152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:11.005859 kubelet[2982]: E1216 04:11:11.005639 2982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd3842ade3f5a215cdc7f884764348224864a27835228bb45f559cc2c4a48152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:11.005859 kubelet[2982]: E1216 04:11:11.005765 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd3842ade3f5a215cdc7f884764348224864a27835228bb45f559cc2c4a48152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cd92j"
Dec 16 04:11:11.005859 kubelet[2982]: E1216 04:11:11.005849 2982 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd3842ade3f5a215cdc7f884764348224864a27835228bb45f559cc2c4a48152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cd92j"
Dec 16 04:11:11.007345 kubelet[2982]: E1216 04:11:11.005978 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cd92j_calico-system(d6d2249c-912c-448c-8aa3-089c6b8243d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cd92j_calico-system(d6d2249c-912c-448c-8aa3-089c6b8243d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd3842ade3f5a215cdc7f884764348224864a27835228bb45f559cc2c4a48152\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1"
Dec 16 04:11:11.008627 containerd[1631]: time="2025-12-16T04:11:11.008578696Z" level=error msg="Failed to destroy network for sandbox \"1cb13e9f8a16e435eec85949e45cc827ccaf0f8f09d20ca37c057652763a4e9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:11.014042 containerd[1631]: time="2025-12-16T04:11:11.013967834Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fjqnd,Uid:9f394ecb-5814-4876-9d24-cba0fe4360b7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cb13e9f8a16e435eec85949e45cc827ccaf0f8f09d20ca37c057652763a4e9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:11.014664 kubelet[2982]: E1216 04:11:11.014401 2982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cb13e9f8a16e435eec85949e45cc827ccaf0f8f09d20ca37c057652763a4e9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:11.014664 kubelet[2982]: E1216 04:11:11.014480 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cb13e9f8a16e435eec85949e45cc827ccaf0f8f09d20ca37c057652763a4e9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fjqnd"
Dec 16 04:11:11.014664 kubelet[2982]: E1216 04:11:11.014517 2982 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cb13e9f8a16e435eec85949e45cc827ccaf0f8f09d20ca37c057652763a4e9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fjqnd"
Dec 16 04:11:11.014880 kubelet[2982]: E1216 04:11:11.014582 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fjqnd_kube-system(9f394ecb-5814-4876-9d24-cba0fe4360b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fjqnd_kube-system(9f394ecb-5814-4876-9d24-cba0fe4360b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cb13e9f8a16e435eec85949e45cc827ccaf0f8f09d20ca37c057652763a4e9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fjqnd" podUID="9f394ecb-5814-4876-9d24-cba0fe4360b7"
Dec 16 04:11:11.015249 containerd[1631]: time="2025-12-16T04:11:11.014344322Z" level=error msg="Failed to destroy network for
sandbox \"df6d76a2a274d9be7bb5423982d715e6d2e8a999a4be4a68abbaa3b70a006fc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 04:11:11.017658 containerd[1631]: time="2025-12-16T04:11:11.017519292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5887594b4-svthm,Uid:ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df6d76a2a274d9be7bb5423982d715e6d2e8a999a4be4a68abbaa3b70a006fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 04:11:11.018335 kubelet[2982]: E1216 04:11:11.018036 2982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df6d76a2a274d9be7bb5423982d715e6d2e8a999a4be4a68abbaa3b70a006fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 04:11:11.018335 kubelet[2982]: E1216 04:11:11.018096 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df6d76a2a274d9be7bb5423982d715e6d2e8a999a4be4a68abbaa3b70a006fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5887594b4-svthm" Dec 16 04:11:11.018335 kubelet[2982]: E1216 04:11:11.018123 2982 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"df6d76a2a274d9be7bb5423982d715e6d2e8a999a4be4a68abbaa3b70a006fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5887594b4-svthm" Dec 16 04:11:11.018548 kubelet[2982]: E1216 04:11:11.018223 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5887594b4-svthm_calico-apiserver(ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5887594b4-svthm_calico-apiserver(ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df6d76a2a274d9be7bb5423982d715e6d2e8a999a4be4a68abbaa3b70a006fc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5887594b4-svthm" podUID="ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1" Dec 16 04:11:11.415634 systemd[1]: run-netns-cni\x2d419b5463\x2dcca5\x2d7853\x2d8682\x2dbac0acb4b5fd.mount: Deactivated successfully. Dec 16 04:11:11.415841 systemd[1]: run-netns-cni\x2d422ada07\x2d2f29\x2dbe88\x2dbe25\x2d0b39cace8f71.mount: Deactivated successfully. Dec 16 04:11:11.415946 systemd[1]: run-netns-cni\x2d857943ab\x2d741e\x2dfedb\x2d0a7e\x2d02a093c40799.mount: Deactivated successfully. Dec 16 04:11:11.416074 systemd[1]: run-netns-cni\x2de960b1b6\x2df3fd\x2d2e89\x2d10ca\x2d1dc6276c4d4f.mount: Deactivated successfully. Dec 16 04:11:11.416194 systemd[1]: run-netns-cni\x2d74ec7db1\x2db957\x2d8999\x2d8e8d\x2d0534a0564086.mount: Deactivated successfully. 
Dec 16 04:11:15.620257 kubelet[2982]: I1216 04:11:15.620050 2982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 04:11:15.768000 audit[3948]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3948 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 04:11:15.778702 kernel: kauditd_printk_skb: 6 callbacks suppressed
Dec 16 04:11:15.778938 kernel: audit: type=1325 audit(1765858275.768:580): table=filter:119 family=2 entries=21 op=nft_register_rule pid=3948 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 04:11:15.768000 audit[3948]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe7daf9aa0 a2=0 a3=7ffe7daf9a8c items=0 ppid=3120 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:11:15.789414 kernel: audit: type=1300 audit(1765858275.768:580): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe7daf9aa0 a2=0 a3=7ffe7daf9a8c items=0 ppid=3120 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:11:15.768000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 04:11:15.794414 kernel: audit: type=1327 audit(1765858275.768:580): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 04:11:15.790000 audit[3948]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3948 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 04:11:15.790000 audit[3948]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe7daf9aa0 a2=0 a3=7ffe7daf9a8c items=0 ppid=3120 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:11:15.799831 kernel: audit: type=1325 audit(1765858275.790:581): table=nat:120 family=2 entries=19 op=nft_register_chain pid=3948 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 04:11:15.799990 kernel: audit: type=1300 audit(1765858275.790:581): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe7daf9aa0 a2=0 a3=7ffe7daf9a8c items=0 ppid=3120 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:11:15.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 04:11:15.812413 kernel: audit: type=1327 audit(1765858275.790:581): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 04:11:21.495728 containerd[1631]: time="2025-12-16T04:11:21.495626173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-knj9k,Uid:d53735f5-b411-4ae8-af6c-2e5709e7684a,Namespace:kube-system,Attempt:0,}"
Dec 16 04:11:21.641821 containerd[1631]: time="2025-12-16T04:11:21.503525305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6665678475-rs6tq,Uid:698ea2f4-6c38-4f29-af10-d89d447f19d4,Namespace:calico-system,Attempt:0,}"
Dec 16 04:11:22.494801 containerd[1631]: time="2025-12-16T04:11:22.494506925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5887594b4-5fm6l,Uid:2b2fbc29-627a-4636-910d-2ada1caf4c64,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 04:11:22.511054 containerd[1631]: time="2025-12-16T04:11:22.509446396Z" level=error msg="Failed to destroy network for sandbox \"4d076ffa30940c8190db183e293b6cc32399e9da55facd692ca37629d9b71d39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:22.517778 systemd[1]: run-netns-cni\x2d8fb834f1\x2d71d2\x2d5c0b\x2d6463\x2dc41dc6b33129.mount: Deactivated successfully.
Dec 16 04:11:22.539158 containerd[1631]: time="2025-12-16T04:11:22.537463208Z" level=error msg="Failed to destroy network for sandbox \"4e3d4fb1e9ae3ff369fc48cdf4c36a0a09c62d9b56a01f11aac3a8af5a56f48c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:22.541486 systemd[1]: run-netns-cni\x2defb43432\x2dbc1a\x2d603c\x2dd605\x2de88b84e8a34e.mount: Deactivated successfully.
Dec 16 04:11:22.553476 containerd[1631]: time="2025-12-16T04:11:22.553040365Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6665678475-rs6tq,Uid:698ea2f4-6c38-4f29-af10-d89d447f19d4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e3d4fb1e9ae3ff369fc48cdf4c36a0a09c62d9b56a01f11aac3a8af5a56f48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:22.555716 containerd[1631]: time="2025-12-16T04:11:22.555419582Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-knj9k,Uid:d53735f5-b411-4ae8-af6c-2e5709e7684a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d076ffa30940c8190db183e293b6cc32399e9da55facd692ca37629d9b71d39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:22.562935 kubelet[2982]: E1216 04:11:22.562844 2982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d076ffa30940c8190db183e293b6cc32399e9da55facd692ca37629d9b71d39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:22.565868 kubelet[2982]: E1216 04:11:22.563046 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d076ffa30940c8190db183e293b6cc32399e9da55facd692ca37629d9b71d39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-knj9k"
Dec 16 04:11:22.565868 kubelet[2982]: E1216 04:11:22.563109 2982 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d076ffa30940c8190db183e293b6cc32399e9da55facd692ca37629d9b71d39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-knj9k"
Dec 16 04:11:22.565868 kubelet[2982]: E1216 04:11:22.563223 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-knj9k_kube-system(d53735f5-b411-4ae8-af6c-2e5709e7684a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-knj9k_kube-system(d53735f5-b411-4ae8-af6c-2e5709e7684a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d076ffa30940c8190db183e293b6cc32399e9da55facd692ca37629d9b71d39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-knj9k" podUID="d53735f5-b411-4ae8-af6c-2e5709e7684a"
Dec 16 04:11:22.567253 kubelet[2982]: E1216 04:11:22.563695 2982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e3d4fb1e9ae3ff369fc48cdf4c36a0a09c62d9b56a01f11aac3a8af5a56f48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:22.567253 kubelet[2982]: E1216 04:11:22.563790 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e3d4fb1e9ae3ff369fc48cdf4c36a0a09c62d9b56a01f11aac3a8af5a56f48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6665678475-rs6tq"
Dec 16 04:11:22.567253 kubelet[2982]: E1216 04:11:22.563850 2982 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e3d4fb1e9ae3ff369fc48cdf4c36a0a09c62d9b56a01f11aac3a8af5a56f48c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6665678475-rs6tq"
Dec 16 04:11:22.567554 kubelet[2982]: E1216 04:11:22.564333 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6665678475-rs6tq_calico-system(698ea2f4-6c38-4f29-af10-d89d447f19d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6665678475-rs6tq_calico-system(698ea2f4-6c38-4f29-af10-d89d447f19d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e3d4fb1e9ae3ff369fc48cdf4c36a0a09c62d9b56a01f11aac3a8af5a56f48c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6665678475-rs6tq" podUID="698ea2f4-6c38-4f29-af10-d89d447f19d4"
Dec 16 04:11:22.755342 containerd[1631]: time="2025-12-16T04:11:22.754682810Z" level=error msg="Failed to destroy network for sandbox \"beb27bfeb9ae4499b97c87092b44bb8669313ca1cc32c2daea597a18c474fec3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:22.938856 systemd[1]: run-netns-cni\x2d509e7449\x2d8fe8\x2d1b3b\x2d08bb\x2dbc5a4a392adf.mount: Deactivated successfully.
Dec 16 04:11:23.024480 containerd[1631]: time="2025-12-16T04:11:23.023614115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5887594b4-5fm6l,Uid:2b2fbc29-627a-4636-910d-2ada1caf4c64,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"beb27bfeb9ae4499b97c87092b44bb8669313ca1cc32c2daea597a18c474fec3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:23.024671 kubelet[2982]: E1216 04:11:23.024387 2982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beb27bfeb9ae4499b97c87092b44bb8669313ca1cc32c2daea597a18c474fec3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:23.024671 kubelet[2982]: E1216 04:11:23.024461 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beb27bfeb9ae4499b97c87092b44bb8669313ca1cc32c2daea597a18c474fec3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l"
Dec 16 04:11:23.024671 kubelet[2982]: E1216 04:11:23.024512 2982 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beb27bfeb9ae4499b97c87092b44bb8669313ca1cc32c2daea597a18c474fec3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l"
Dec 16 04:11:23.024870 kubelet[2982]: E1216 04:11:23.024569 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5887594b4-5fm6l_calico-apiserver(2b2fbc29-627a-4636-910d-2ada1caf4c64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5887594b4-5fm6l_calico-apiserver(2b2fbc29-627a-4636-910d-2ada1caf4c64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"beb27bfeb9ae4499b97c87092b44bb8669313ca1cc32c2daea597a18c474fec3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l" podUID="2b2fbc29-627a-4636-910d-2ada1caf4c64"
Dec 16 04:11:23.494785 containerd[1631]: time="2025-12-16T04:11:23.494634765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-df4d97596-pnqrb,Uid:0a58bc2a-4639-4181-980c-f9b5b1855f06,Namespace:calico-system,Attempt:0,}"
Dec 16 04:11:23.497312 containerd[1631]: time="2025-12-16T04:11:23.497273443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5887594b4-svthm,Uid:ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 04:11:23.498079 containerd[1631]: time="2025-12-16T04:11:23.497449909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cd92j,Uid:d6d2249c-912c-448c-8aa3-089c6b8243d1,Namespace:calico-system,Attempt:0,}"
Dec 16 04:11:23.704009 containerd[1631]: time="2025-12-16T04:11:23.703845742Z" level=error msg="Failed to destroy network for sandbox \"d360fc10429f201b976c57442e4b6df470668017fbc25626b83277b4517be409\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:23.708063 systemd[1]: run-netns-cni\x2d6983ec1e\x2d32ae\x2d19e1\x2d7813\x2d3b892084e332.mount: Deactivated successfully.
Dec 16 04:11:23.720178 containerd[1631]: time="2025-12-16T04:11:23.719765599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5887594b4-svthm,Uid:ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d360fc10429f201b976c57442e4b6df470668017fbc25626b83277b4517be409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:23.720485 kubelet[2982]: E1216 04:11:23.720095 2982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d360fc10429f201b976c57442e4b6df470668017fbc25626b83277b4517be409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:23.723279 kubelet[2982]: E1216 04:11:23.720950 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d360fc10429f201b976c57442e4b6df470668017fbc25626b83277b4517be409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5887594b4-svthm"
Dec 16 04:11:23.723279 kubelet[2982]: E1216 04:11:23.720999 2982 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d360fc10429f201b976c57442e4b6df470668017fbc25626b83277b4517be409\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5887594b4-svthm"
Dec 16 04:11:23.723279 kubelet[2982]: E1216 04:11:23.721099 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5887594b4-svthm_calico-apiserver(ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5887594b4-svthm_calico-apiserver(ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d360fc10429f201b976c57442e4b6df470668017fbc25626b83277b4517be409\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5887594b4-svthm" podUID="ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1"
Dec 16 04:11:23.757484 containerd[1631]: time="2025-12-16T04:11:23.756728260Z" level=error msg="Failed to destroy network for sandbox \"81b85a6686788b345d1fc45aa61b2dbc9ae9d2b34bfffc0bb45af6e3e024b59c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:23.789806 containerd[1631]: time="2025-12-16T04:11:23.789731380Z" level=error msg="Failed to destroy network for sandbox \"0140579635e68865da82970d57400338071b13971aa0a2240028e83643278da4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:23.865160 containerd[1631]: time="2025-12-16T04:11:23.865093253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-df4d97596-pnqrb,Uid:0a58bc2a-4639-4181-980c-f9b5b1855f06,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"81b85a6686788b345d1fc45aa61b2dbc9ae9d2b34bfffc0bb45af6e3e024b59c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:23.867174 kubelet[2982]: E1216 04:11:23.865719 2982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81b85a6686788b345d1fc45aa61b2dbc9ae9d2b34bfffc0bb45af6e3e024b59c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:23.867174 kubelet[2982]: E1216 04:11:23.865811 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81b85a6686788b345d1fc45aa61b2dbc9ae9d2b34bfffc0bb45af6e3e024b59c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-df4d97596-pnqrb"
Dec 16 04:11:23.867174 kubelet[2982]: E1216 04:11:23.865860 2982 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81b85a6686788b345d1fc45aa61b2dbc9ae9d2b34bfffc0bb45af6e3e024b59c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-df4d97596-pnqrb"
Dec 16 04:11:23.867535 kubelet[2982]: E1216 04:11:23.865917 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-df4d97596-pnqrb_calico-system(0a58bc2a-4639-4181-980c-f9b5b1855f06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-df4d97596-pnqrb_calico-system(0a58bc2a-4639-4181-980c-f9b5b1855f06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81b85a6686788b345d1fc45aa61b2dbc9ae9d2b34bfffc0bb45af6e3e024b59c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-df4d97596-pnqrb" podUID="0a58bc2a-4639-4181-980c-f9b5b1855f06"
Dec 16 04:11:23.869064 containerd[1631]: time="2025-12-16T04:11:23.869018484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cd92j,Uid:d6d2249c-912c-448c-8aa3-089c6b8243d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0140579635e68865da82970d57400338071b13971aa0a2240028e83643278da4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:23.869410 kubelet[2982]: E1216 04:11:23.869352 2982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0140579635e68865da82970d57400338071b13971aa0a2240028e83643278da4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 04:11:23.869586 kubelet[2982]: E1216 04:11:23.869540 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0140579635e68865da82970d57400338071b13971aa0a2240028e83643278da4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cd92j"
Dec 16 04:11:23.869973 kubelet[2982]: E1216 04:11:23.869898 2982 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0140579635e68865da82970d57400338071b13971aa0a2240028e83643278da4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cd92j"
Dec 16 04:11:23.870422 kubelet[2982]: E1216 04:11:23.870138 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cd92j_calico-system(d6d2249c-912c-448c-8aa3-089c6b8243d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cd92j_calico-system(d6d2249c-912c-448c-8aa3-089c6b8243d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0140579635e68865da82970d57400338071b13971aa0a2240028e83643278da4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1"
Dec 16 04:11:23.938749 systemd[1]: run-netns-cni\x2d5b9cd3b5\x2d9ac0\x2d12c6\x2d2b59\x2db8dea32a4be7.mount: Deactivated successfully.
Dec 16 04:11:23.938927 systemd[1]: run-netns-cni\x2d0820e975\x2dd562\x2d6eb4\x2d410b\x2d6bda3c18525a.mount: Deactivated successfully.
Dec 16 04:11:24.495792 containerd[1631]: time="2025-12-16T04:11:24.495445597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fjqnd,Uid:9f394ecb-5814-4876-9d24-cba0fe4360b7,Namespace:kube-system,Attempt:0,}" Dec 16 04:11:24.653794 containerd[1631]: time="2025-12-16T04:11:24.653723817Z" level=error msg="Failed to destroy network for sandbox \"abd29272faf6fc05d70e57dec3171d5d04bc12d2fc4aab88036f81d9eaa855b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 04:11:24.658317 containerd[1631]: time="2025-12-16T04:11:24.657404897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fjqnd,Uid:9f394ecb-5814-4876-9d24-cba0fe4360b7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"abd29272faf6fc05d70e57dec3171d5d04bc12d2fc4aab88036f81d9eaa855b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 04:11:24.658389 systemd[1]: run-netns-cni\x2d2d6a207e\x2d90c2\x2d6e12\x2d16a9\x2d668bbc8c1c8e.mount: Deactivated successfully. 
Dec 16 04:11:24.659356 kubelet[2982]: E1216 04:11:24.659286 2982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abd29272faf6fc05d70e57dec3171d5d04bc12d2fc4aab88036f81d9eaa855b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 04:11:24.661460 kubelet[2982]: E1216 04:11:24.661416 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abd29272faf6fc05d70e57dec3171d5d04bc12d2fc4aab88036f81d9eaa855b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fjqnd" Dec 16 04:11:24.661630 kubelet[2982]: E1216 04:11:24.661468 2982 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abd29272faf6fc05d70e57dec3171d5d04bc12d2fc4aab88036f81d9eaa855b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fjqnd" Dec 16 04:11:24.661630 kubelet[2982]: E1216 04:11:24.661540 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fjqnd_kube-system(9f394ecb-5814-4876-9d24-cba0fe4360b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fjqnd_kube-system(9f394ecb-5814-4876-9d24-cba0fe4360b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abd29272faf6fc05d70e57dec3171d5d04bc12d2fc4aab88036f81d9eaa855b2\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fjqnd" podUID="9f394ecb-5814-4876-9d24-cba0fe4360b7" Dec 16 04:11:25.306265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount138835101.mount: Deactivated successfully. Dec 16 04:11:25.403291 containerd[1631]: time="2025-12-16T04:11:25.403004217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:11:25.415702 containerd[1631]: time="2025-12-16T04:11:25.415537000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 04:11:25.450090 containerd[1631]: time="2025-12-16T04:11:25.449881038Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:11:25.453649 containerd[1631]: time="2025-12-16T04:11:25.453203171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 04:11:25.454262 containerd[1631]: time="2025-12-16T04:11:25.454188728Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 15.566986497s" Dec 16 04:11:25.467126 containerd[1631]: time="2025-12-16T04:11:25.466628375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 
04:11:25.541206 containerd[1631]: time="2025-12-16T04:11:25.541152433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s8rh7,Uid:bad388bf-fcef-4c56-88ec-bd97ca364c03,Namespace:calico-system,Attempt:0,}" Dec 16 04:11:25.588192 containerd[1631]: time="2025-12-16T04:11:25.588138292Z" level=info msg="CreateContainer within sandbox \"b14ba40474052be58c56d4e22f81abf0e363d81b36a87832bead22f760ace845\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 04:11:25.689821 containerd[1631]: time="2025-12-16T04:11:25.689774445Z" level=info msg="Container 9da89cd45b2e9ff66fc51f1b8c49f359c902a08b0a849f9c89e6520abaa6a696: CDI devices from CRI Config.CDIDevices: []" Dec 16 04:11:25.732037 containerd[1631]: time="2025-12-16T04:11:25.731931051Z" level=error msg="Failed to destroy network for sandbox \"2f4bd2bf7b47057c76979e91046d3602b8e74123702504f623c30b0356669499\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 04:11:25.737454 containerd[1631]: time="2025-12-16T04:11:25.736470370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s8rh7,Uid:bad388bf-fcef-4c56-88ec-bd97ca364c03,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f4bd2bf7b47057c76979e91046d3602b8e74123702504f623c30b0356669499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 04:11:25.738638 kubelet[2982]: E1216 04:11:25.738585 2982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f4bd2bf7b47057c76979e91046d3602b8e74123702504f623c30b0356669499\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 04:11:25.739658 kubelet[2982]: E1216 04:11:25.738668 2982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f4bd2bf7b47057c76979e91046d3602b8e74123702504f623c30b0356669499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s8rh7" Dec 16 04:11:25.739658 kubelet[2982]: E1216 04:11:25.738700 2982 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f4bd2bf7b47057c76979e91046d3602b8e74123702504f623c30b0356669499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s8rh7" Dec 16 04:11:25.739658 kubelet[2982]: E1216 04:11:25.738760 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-s8rh7_calico-system(bad388bf-fcef-4c56-88ec-bd97ca364c03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-s8rh7_calico-system(bad388bf-fcef-4c56-88ec-bd97ca364c03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f4bd2bf7b47057c76979e91046d3602b8e74123702504f623c30b0356669499\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-s8rh7" podUID="bad388bf-fcef-4c56-88ec-bd97ca364c03" Dec 16 04:11:25.767851 containerd[1631]: 
time="2025-12-16T04:11:25.767672230Z" level=info msg="CreateContainer within sandbox \"b14ba40474052be58c56d4e22f81abf0e363d81b36a87832bead22f760ace845\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9da89cd45b2e9ff66fc51f1b8c49f359c902a08b0a849f9c89e6520abaa6a696\"" Dec 16 04:11:25.769765 containerd[1631]: time="2025-12-16T04:11:25.768550509Z" level=info msg="StartContainer for \"9da89cd45b2e9ff66fc51f1b8c49f359c902a08b0a849f9c89e6520abaa6a696\"" Dec 16 04:11:25.777727 containerd[1631]: time="2025-12-16T04:11:25.777668179Z" level=info msg="connecting to shim 9da89cd45b2e9ff66fc51f1b8c49f359c902a08b0a849f9c89e6520abaa6a696" address="unix:///run/containerd/s/e12ee8d71d1d5eddf3d6cff44d22087b8118a8a1ade495869de2a831108fec4b" protocol=ttrpc version=3 Dec 16 04:11:25.914804 systemd[1]: Started cri-containerd-9da89cd45b2e9ff66fc51f1b8c49f359c902a08b0a849f9c89e6520abaa6a696.scope - libcontainer container 9da89cd45b2e9ff66fc51f1b8c49f359c902a08b0a849f9c89e6520abaa6a696. 
Dec 16 04:11:26.047000 audit: BPF prog-id=176 op=LOAD Dec 16 04:11:26.057406 kernel: audit: type=1334 audit(1765858286.047:582): prog-id=176 op=LOAD Dec 16 04:11:26.047000 audit[4166]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3477 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:26.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964613839636434356232653966663636666335316631623863343966 Dec 16 04:11:26.069797 kernel: audit: type=1300 audit(1765858286.047:582): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3477 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:26.069895 kernel: audit: type=1327 audit(1765858286.047:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964613839636434356232653966663636666335316631623863343966 Dec 16 04:11:26.056000 audit: BPF prog-id=177 op=LOAD Dec 16 04:11:26.073584 kernel: audit: type=1334 audit(1765858286.056:583): prog-id=177 op=LOAD Dec 16 04:11:26.075397 kernel: audit: type=1300 audit(1765858286.056:583): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3477 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
04:11:26.056000 audit[4166]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3477 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:26.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964613839636434356232653966663636666335316631623863343966 Dec 16 04:11:26.081138 kernel: audit: type=1327 audit(1765858286.056:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964613839636434356232653966663636666335316631623863343966 Dec 16 04:11:26.056000 audit: BPF prog-id=177 op=UNLOAD Dec 16 04:11:26.084880 kernel: audit: type=1334 audit(1765858286.056:584): prog-id=177 op=UNLOAD Dec 16 04:11:26.056000 audit[4166]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3477 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:26.087480 kernel: audit: type=1300 audit(1765858286.056:584): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3477 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:26.056000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964613839636434356232653966663636666335316631623863343966 Dec 16 04:11:26.056000 audit: BPF prog-id=176 op=UNLOAD Dec 16 04:11:26.097871 kernel: audit: type=1327 audit(1765858286.056:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964613839636434356232653966663636666335316631623863343966 Dec 16 04:11:26.097980 kernel: audit: type=1334 audit(1765858286.056:585): prog-id=176 op=UNLOAD Dec 16 04:11:26.056000 audit[4166]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3477 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:26.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964613839636434356232653966663636666335316631623863343966 Dec 16 04:11:26.056000 audit: BPF prog-id=178 op=LOAD Dec 16 04:11:26.056000 audit[4166]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3477 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:26.056000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964613839636434356232653966663636666335316631623863343966 Dec 16 04:11:26.143677 containerd[1631]: time="2025-12-16T04:11:26.143626241Z" level=info msg="StartContainer for \"9da89cd45b2e9ff66fc51f1b8c49f359c902a08b0a849f9c89e6520abaa6a696\" returns successfully" Dec 16 04:11:26.309041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount857335638.mount: Deactivated successfully. Dec 16 04:11:26.309203 systemd[1]: run-netns-cni\x2d69bbbf75\x2d68c0\x2dae42\x2d1046\x2d16d04056573a.mount: Deactivated successfully. Dec 16 04:11:26.547674 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 04:11:26.553662 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 04:11:26.949452 kubelet[2982]: I1216 04:11:26.948210 2982 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmlpb\" (UniqueName: \"kubernetes.io/projected/0a58bc2a-4639-4181-980c-f9b5b1855f06-kube-api-access-vmlpb\") pod \"0a58bc2a-4639-4181-980c-f9b5b1855f06\" (UID: \"0a58bc2a-4639-4181-980c-f9b5b1855f06\") " Dec 16 04:11:26.950966 kubelet[2982]: I1216 04:11:26.950274 2982 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a58bc2a-4639-4181-980c-f9b5b1855f06-whisker-ca-bundle\") pod \"0a58bc2a-4639-4181-980c-f9b5b1855f06\" (UID: \"0a58bc2a-4639-4181-980c-f9b5b1855f06\") " Dec 16 04:11:26.950966 kubelet[2982]: I1216 04:11:26.950354 2982 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0a58bc2a-4639-4181-980c-f9b5b1855f06-whisker-backend-key-pair\") pod \"0a58bc2a-4639-4181-980c-f9b5b1855f06\" (UID: 
\"0a58bc2a-4639-4181-980c-f9b5b1855f06\") " Dec 16 04:11:26.988508 kubelet[2982]: I1216 04:11:26.984795 2982 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a58bc2a-4639-4181-980c-f9b5b1855f06-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0a58bc2a-4639-4181-980c-f9b5b1855f06" (UID: "0a58bc2a-4639-4181-980c-f9b5b1855f06"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 04:11:26.993557 systemd[1]: var-lib-kubelet-pods-0a58bc2a\x2d4639\x2d4181\x2d980c\x2df9b5b1855f06-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvmlpb.mount: Deactivated successfully. Dec 16 04:11:26.994564 kubelet[2982]: I1216 04:11:26.993585 2982 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a58bc2a-4639-4181-980c-f9b5b1855f06-kube-api-access-vmlpb" (OuterVolumeSpecName: "kube-api-access-vmlpb") pod "0a58bc2a-4639-4181-980c-f9b5b1855f06" (UID: "0a58bc2a-4639-4181-980c-f9b5b1855f06"). InnerVolumeSpecName "kube-api-access-vmlpb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 04:11:26.996938 kubelet[2982]: I1216 04:11:26.996898 2982 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a58bc2a-4639-4181-980c-f9b5b1855f06-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0a58bc2a-4639-4181-980c-f9b5b1855f06" (UID: "0a58bc2a-4639-4181-980c-f9b5b1855f06"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 04:11:26.997344 systemd[1]: var-lib-kubelet-pods-0a58bc2a\x2d4639\x2d4181\x2d980c\x2df9b5b1855f06-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 04:11:27.050883 kubelet[2982]: I1216 04:11:27.050824 2982 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vmlpb\" (UniqueName: \"kubernetes.io/projected/0a58bc2a-4639-4181-980c-f9b5b1855f06-kube-api-access-vmlpb\") on node \"srv-cuii1.gb1.brightbox.com\" DevicePath \"\"" Dec 16 04:11:27.050883 kubelet[2982]: I1216 04:11:27.050873 2982 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a58bc2a-4639-4181-980c-f9b5b1855f06-whisker-ca-bundle\") on node \"srv-cuii1.gb1.brightbox.com\" DevicePath \"\"" Dec 16 04:11:27.051146 kubelet[2982]: I1216 04:11:27.050900 2982 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0a58bc2a-4639-4181-980c-f9b5b1855f06-whisker-backend-key-pair\") on node \"srv-cuii1.gb1.brightbox.com\" DevicePath \"\"" Dec 16 04:11:27.065479 systemd[1]: Removed slice kubepods-besteffort-pod0a58bc2a_4639_4181_980c_f9b5b1855f06.slice - libcontainer container kubepods-besteffort-pod0a58bc2a_4639_4181_980c_f9b5b1855f06.slice. Dec 16 04:11:27.146951 kubelet[2982]: I1216 04:11:27.137970 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zqj5q" podStartSLOduration=3.860503494 podStartE2EDuration="37.13790191s" podCreationTimestamp="2025-12-16 04:10:50 +0000 UTC" firstStartedPulling="2025-12-16 04:10:52.190802921 +0000 UTC m=+26.902545522" lastFinishedPulling="2025-12-16 04:11:25.468201324 +0000 UTC m=+60.179943938" observedRunningTime="2025-12-16 04:11:27.13407694 +0000 UTC m=+61.845819579" watchObservedRunningTime="2025-12-16 04:11:27.13790191 +0000 UTC m=+61.849644518" Dec 16 04:11:27.338058 systemd[1]: Created slice kubepods-besteffort-pod5542aa1b_8a7b_412d_a408_cf0a80b3e3bc.slice - libcontainer container kubepods-besteffort-pod5542aa1b_8a7b_412d_a408_cf0a80b3e3bc.slice. 
Dec 16 04:11:27.366261 kubelet[2982]: I1216 04:11:27.366209 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5542aa1b-8a7b-412d-a408-cf0a80b3e3bc-whisker-ca-bundle\") pod \"whisker-756bfb6d7d-p4kdb\" (UID: \"5542aa1b-8a7b-412d-a408-cf0a80b3e3bc\") " pod="calico-system/whisker-756bfb6d7d-p4kdb" Dec 16 04:11:27.367566 kubelet[2982]: I1216 04:11:27.367536 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df4q7\" (UniqueName: \"kubernetes.io/projected/5542aa1b-8a7b-412d-a408-cf0a80b3e3bc-kube-api-access-df4q7\") pod \"whisker-756bfb6d7d-p4kdb\" (UID: \"5542aa1b-8a7b-412d-a408-cf0a80b3e3bc\") " pod="calico-system/whisker-756bfb6d7d-p4kdb" Dec 16 04:11:27.377677 kubelet[2982]: I1216 04:11:27.377611 2982 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5542aa1b-8a7b-412d-a408-cf0a80b3e3bc-whisker-backend-key-pair\") pod \"whisker-756bfb6d7d-p4kdb\" (UID: \"5542aa1b-8a7b-412d-a408-cf0a80b3e3bc\") " pod="calico-system/whisker-756bfb6d7d-p4kdb" Dec 16 04:11:27.503176 kubelet[2982]: I1216 04:11:27.502870 2982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a58bc2a-4639-4181-980c-f9b5b1855f06" path="/var/lib/kubelet/pods/0a58bc2a-4639-4181-980c-f9b5b1855f06/volumes" Dec 16 04:11:27.646262 containerd[1631]: time="2025-12-16T04:11:27.646111018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-756bfb6d7d-p4kdb,Uid:5542aa1b-8a7b-412d-a408-cf0a80b3e3bc,Namespace:calico-system,Attempt:0,}" Dec 16 04:11:28.187941 systemd-networkd[1555]: cali8c72564175f: Link UP Dec 16 04:11:28.188505 systemd-networkd[1555]: cali8c72564175f: Gained carrier Dec 16 04:11:28.229853 containerd[1631]: 2025-12-16 04:11:27.688 [INFO][4254] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Dec 16 04:11:28.229853 containerd[1631]: 2025-12-16 04:11:27.734 [INFO][4254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--cuii1.gb1.brightbox.com-k8s-whisker--756bfb6d7d--p4kdb-eth0 whisker-756bfb6d7d- calico-system 5542aa1b-8a7b-412d-a408-cf0a80b3e3bc 922 0 2025-12-16 04:11:27 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:756bfb6d7d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-cuii1.gb1.brightbox.com whisker-756bfb6d7d-p4kdb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8c72564175f [] [] }} ContainerID="84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" Namespace="calico-system" Pod="whisker-756bfb6d7d-p4kdb" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-whisker--756bfb6d7d--p4kdb-" Dec 16 04:11:28.229853 containerd[1631]: 2025-12-16 04:11:27.734 [INFO][4254] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" Namespace="calico-system" Pod="whisker-756bfb6d7d-p4kdb" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-whisker--756bfb6d7d--p4kdb-eth0" Dec 16 04:11:28.229853 containerd[1631]: 2025-12-16 04:11:28.059 [INFO][4266] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" HandleID="k8s-pod-network.84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" Workload="srv--cuii1.gb1.brightbox.com-k8s-whisker--756bfb6d7d--p4kdb-eth0" Dec 16 04:11:28.238532 containerd[1631]: 2025-12-16 04:11:28.061 [INFO][4266] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" 
HandleID="k8s-pod-network.84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" Workload="srv--cuii1.gb1.brightbox.com-k8s-whisker--756bfb6d7d--p4kdb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ca240), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-cuii1.gb1.brightbox.com", "pod":"whisker-756bfb6d7d-p4kdb", "timestamp":"2025-12-16 04:11:28.059274072 +0000 UTC"}, Hostname:"srv-cuii1.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 04:11:28.238532 containerd[1631]: 2025-12-16 04:11:28.061 [INFO][4266] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 04:11:28.238532 containerd[1631]: 2025-12-16 04:11:28.062 [INFO][4266] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 04:11:28.238532 containerd[1631]: 2025-12-16 04:11:28.062 [INFO][4266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-cuii1.gb1.brightbox.com' Dec 16 04:11:28.238532 containerd[1631]: 2025-12-16 04:11:28.090 [INFO][4266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:28.238532 containerd[1631]: 2025-12-16 04:11:28.117 [INFO][4266] ipam/ipam.go 394: Looking up existing affinities for host host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:28.238532 containerd[1631]: 2025-12-16 04:11:28.127 [INFO][4266] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:28.238532 containerd[1631]: 2025-12-16 04:11:28.131 [INFO][4266] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:28.238532 containerd[1631]: 2025-12-16 04:11:28.134 [INFO][4266] ipam/ipam.go 235: Affinity 
is confirmed and block has been loaded cidr=192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:28.238968 containerd[1631]: 2025-12-16 04:11:28.134 [INFO][4266] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:28.238968 containerd[1631]: 2025-12-16 04:11:28.136 [INFO][4266] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29 Dec 16 04:11:28.238968 containerd[1631]: 2025-12-16 04:11:28.143 [INFO][4266] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:28.238968 containerd[1631]: 2025-12-16 04:11:28.151 [INFO][4266] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.193/26] block=192.168.103.192/26 handle="k8s-pod-network.84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:28.238968 containerd[1631]: 2025-12-16 04:11:28.152 [INFO][4266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.193/26] handle="k8s-pod-network.84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:28.238968 containerd[1631]: 2025-12-16 04:11:28.152 [INFO][4266] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 04:11:28.238968 containerd[1631]: 2025-12-16 04:11:28.152 [INFO][4266] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.193/26] IPv6=[] ContainerID="84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" HandleID="k8s-pod-network.84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" Workload="srv--cuii1.gb1.brightbox.com-k8s-whisker--756bfb6d7d--p4kdb-eth0" Dec 16 04:11:28.239285 containerd[1631]: 2025-12-16 04:11:28.157 [INFO][4254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" Namespace="calico-system" Pod="whisker-756bfb6d7d-p4kdb" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-whisker--756bfb6d7d--p4kdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cuii1.gb1.brightbox.com-k8s-whisker--756bfb6d7d--p4kdb-eth0", GenerateName:"whisker-756bfb6d7d-", Namespace:"calico-system", SelfLink:"", UID:"5542aa1b-8a7b-412d-a408-cf0a80b3e3bc", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 4, 11, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"756bfb6d7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cuii1.gb1.brightbox.com", ContainerID:"", Pod:"whisker-756bfb6d7d-p4kdb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.103.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali8c72564175f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 04:11:28.239285 containerd[1631]: 2025-12-16 04:11:28.157 [INFO][4254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.193/32] ContainerID="84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" Namespace="calico-system" Pod="whisker-756bfb6d7d-p4kdb" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-whisker--756bfb6d7d--p4kdb-eth0" Dec 16 04:11:28.244408 containerd[1631]: 2025-12-16 04:11:28.157 [INFO][4254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c72564175f ContainerID="84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" Namespace="calico-system" Pod="whisker-756bfb6d7d-p4kdb" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-whisker--756bfb6d7d--p4kdb-eth0" Dec 16 04:11:28.244408 containerd[1631]: 2025-12-16 04:11:28.201 [INFO][4254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" Namespace="calico-system" Pod="whisker-756bfb6d7d-p4kdb" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-whisker--756bfb6d7d--p4kdb-eth0" Dec 16 04:11:28.244647 containerd[1631]: 2025-12-16 04:11:28.204 [INFO][4254] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" Namespace="calico-system" Pod="whisker-756bfb6d7d-p4kdb" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-whisker--756bfb6d7d--p4kdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cuii1.gb1.brightbox.com-k8s-whisker--756bfb6d7d--p4kdb-eth0", GenerateName:"whisker-756bfb6d7d-", Namespace:"calico-system", SelfLink:"", 
UID:"5542aa1b-8a7b-412d-a408-cf0a80b3e3bc", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 4, 11, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"756bfb6d7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cuii1.gb1.brightbox.com", ContainerID:"84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29", Pod:"whisker-756bfb6d7d-p4kdb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.103.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8c72564175f", MAC:"ba:56:34:4d:20:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 04:11:28.244761 containerd[1631]: 2025-12-16 04:11:28.221 [INFO][4254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" Namespace="calico-system" Pod="whisker-756bfb6d7d-p4kdb" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-whisker--756bfb6d7d--p4kdb-eth0" Dec 16 04:11:28.476705 containerd[1631]: time="2025-12-16T04:11:28.476550771Z" level=info msg="connecting to shim 84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29" address="unix:///run/containerd/s/d004db23870dd300b690de5d6f724d03cb54fa1b8fc10a0d9622fb54c6b0a7f0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 04:11:28.569082 systemd[1]: Started 
cri-containerd-84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29.scope - libcontainer container 84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29. Dec 16 04:11:28.610000 audit: BPF prog-id=179 op=LOAD Dec 16 04:11:28.611000 audit: BPF prog-id=180 op=LOAD Dec 16 04:11:28.611000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4315 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:28.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643139353731396263396130376331383161323439326233636363 Dec 16 04:11:28.612000 audit: BPF prog-id=180 op=UNLOAD Dec 16 04:11:28.612000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4315 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:28.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643139353731396263396130376331383161323439326233636363 Dec 16 04:11:28.614000 audit: BPF prog-id=181 op=LOAD Dec 16 04:11:28.614000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4315 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:28.614000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643139353731396263396130376331383161323439326233636363 Dec 16 04:11:28.614000 audit: BPF prog-id=182 op=LOAD Dec 16 04:11:28.614000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4315 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:28.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643139353731396263396130376331383161323439326233636363 Dec 16 04:11:28.614000 audit: BPF prog-id=182 op=UNLOAD Dec 16 04:11:28.614000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4315 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:28.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643139353731396263396130376331383161323439326233636363 Dec 16 04:11:28.615000 audit: BPF prog-id=181 op=UNLOAD Dec 16 04:11:28.615000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4315 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 04:11:28.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643139353731396263396130376331383161323439326233636363 Dec 16 04:11:28.615000 audit: BPF prog-id=183 op=LOAD Dec 16 04:11:28.615000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4315 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:28.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834643139353731396263396130376331383161323439326233636363 Dec 16 04:11:28.726449 containerd[1631]: time="2025-12-16T04:11:28.726398357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-756bfb6d7d-p4kdb,Uid:5542aa1b-8a7b-412d-a408-cf0a80b3e3bc,Namespace:calico-system,Attempt:0,} returns sandbox id \"84d195719bc9a07c181a2492b3ccc9c1e1ef6f5ec4a1279e76cb75db71c50d29\"" Dec 16 04:11:28.732548 containerd[1631]: time="2025-12-16T04:11:28.732174560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 04:11:29.057717 containerd[1631]: time="2025-12-16T04:11:29.057474162Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:11:29.059818 containerd[1631]: time="2025-12-16T04:11:29.059607175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 
04:11:29.060145 containerd[1631]: time="2025-12-16T04:11:29.059659092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 04:11:29.060864 kubelet[2982]: E1216 04:11:29.060692 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 04:11:29.072297 kubelet[2982]: E1216 04:11:29.072051 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 04:11:29.106993 kubelet[2982]: E1216 04:11:29.106251 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:48980414ea8b4415945e777cac55846b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-df4q7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-756bfb6d7d-p4kdb_calico-system(5542aa1b-8a7b-412d-a408-cf0a80b3e3bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 04:11:29.111454 containerd[1631]: time="2025-12-16T04:11:29.111410627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 04:11:29.281000 audit: BPF prog-id=184 op=LOAD Dec 16 
04:11:29.281000 audit[4473]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6e85a6c0 a2=98 a3=1fffffffffffffff items=0 ppid=4357 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.281000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 04:11:29.281000 audit: BPF prog-id=184 op=UNLOAD Dec 16 04:11:29.281000 audit[4473]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff6e85a690 a3=0 items=0 ppid=4357 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.281000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 04:11:29.282000 audit: BPF prog-id=185 op=LOAD Dec 16 04:11:29.282000 audit[4473]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6e85a5a0 a2=94 a3=3 items=0 ppid=4357 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.282000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 04:11:29.282000 audit: BPF prog-id=185 op=UNLOAD Dec 16 04:11:29.282000 audit[4473]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff6e85a5a0 a2=94 a3=3 items=0 ppid=4357 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.282000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 04:11:29.282000 audit: BPF prog-id=186 op=LOAD Dec 16 04:11:29.282000 audit[4473]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6e85a5e0 a2=94 a3=7fff6e85a7c0 items=0 ppid=4357 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.282000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 04:11:29.282000 audit: BPF prog-id=186 op=UNLOAD Dec 16 04:11:29.282000 audit[4473]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff6e85a5e0 a2=94 a3=7fff6e85a7c0 items=0 ppid=4357 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.282000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 04:11:29.286000 audit: BPF prog-id=187 op=LOAD Dec 16 04:11:29.286000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff812d1310 a2=98 a3=3 items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.286000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.286000 audit: BPF prog-id=187 op=UNLOAD Dec 16 04:11:29.286000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff812d12e0 a3=0 items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.286000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.287000 audit: BPF prog-id=188 op=LOAD Dec 16 04:11:29.287000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff812d1100 a2=94 a3=54428f items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.287000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.287000 audit: BPF prog-id=188 op=UNLOAD Dec 16 04:11:29.287000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff812d1100 a2=94 a3=54428f items=0 ppid=4357 
pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.287000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.287000 audit: BPF prog-id=189 op=LOAD Dec 16 04:11:29.287000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff812d1130 a2=94 a3=2 items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.287000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.287000 audit: BPF prog-id=189 op=UNLOAD Dec 16 04:11:29.287000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff812d1130 a2=0 a3=2 items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.287000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.450160 containerd[1631]: time="2025-12-16T04:11:29.450023954Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:11:29.451449 containerd[1631]: time="2025-12-16T04:11:29.451342642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 04:11:29.451527 containerd[1631]: time="2025-12-16T04:11:29.451402941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 04:11:29.451918 
kubelet[2982]: E1216 04:11:29.451851 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 04:11:29.452089 kubelet[2982]: E1216 04:11:29.452054 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 04:11:29.452538 kubelet[2982]: E1216 04:11:29.452446 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-df4q7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,Readiness
Probe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-756bfb6d7d-p4kdb_calico-system(5542aa1b-8a7b-412d-a408-cf0a80b3e3bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 04:11:29.459494 kubelet[2982]: E1216 04:11:29.459168 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-756bfb6d7d-p4kdb" podUID="5542aa1b-8a7b-412d-a408-cf0a80b3e3bc" Dec 16 04:11:29.529000 audit: BPF prog-id=190 op=LOAD Dec 16 04:11:29.529000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff812d0ff0 a2=94 a3=1 items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.529000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.529000 audit: BPF prog-id=190 op=UNLOAD Dec 16 04:11:29.529000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff812d0ff0 a2=94 a3=1 items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.529000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.544000 audit: BPF prog-id=191 op=LOAD Dec 16 04:11:29.544000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff812d0fe0 a2=94 a3=4 items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.544000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.544000 audit: BPF prog-id=191 op=UNLOAD Dec 16 04:11:29.544000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff812d0fe0 a2=0 a3=4 items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.544000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.544000 audit: BPF prog-id=192 op=LOAD Dec 16 04:11:29.544000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff812d0e40 a2=94 a3=5 items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.544000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.544000 audit: BPF prog-id=192 op=UNLOAD Dec 16 04:11:29.544000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff812d0e40 a2=0 a3=5 items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.544000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.544000 audit: BPF prog-id=193 op=LOAD Dec 16 04:11:29.544000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff812d1060 a2=94 a3=6 items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.544000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.545000 audit: BPF prog-id=193 op=UNLOAD Dec 16 04:11:29.545000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff812d1060 a2=0 a3=6 items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.545000 audit: BPF prog-id=194 op=LOAD Dec 16 04:11:29.545000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff812d0810 a2=94 a3=88 items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
04:11:29.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.545000 audit: BPF prog-id=195 op=LOAD Dec 16 04:11:29.545000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff812d0690 a2=94 a3=2 items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.545000 audit: BPF prog-id=195 op=UNLOAD Dec 16 04:11:29.545000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff812d06c0 a2=0 a3=7fff812d07c0 items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.545000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.546000 audit: BPF prog-id=194 op=UNLOAD Dec 16 04:11:29.546000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2d5afd10 a2=0 a3=2819d9bb6bc8cb2d items=0 ppid=4357 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.546000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 04:11:29.561000 audit: BPF prog-id=196 op=LOAD Dec 16 04:11:29.561000 audit[4477]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb6b638c0 a2=98 a3=1999999999999999 items=0 ppid=4357 pid=4477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.561000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 04:11:29.561000 audit: BPF prog-id=196 op=UNLOAD Dec 16 04:11:29.561000 audit[4477]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffb6b63890 a3=0 items=0 ppid=4357 pid=4477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.561000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 04:11:29.561000 audit: BPF prog-id=197 op=LOAD Dec 16 04:11:29.561000 audit[4477]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb6b637a0 a2=94 a3=ffff items=0 ppid=4357 pid=4477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.561000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 04:11:29.561000 audit: BPF prog-id=197 op=UNLOAD Dec 16 04:11:29.561000 audit[4477]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffb6b637a0 a2=94 a3=ffff items=0 ppid=4357 pid=4477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.561000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 04:11:29.561000 audit: BPF prog-id=198 op=LOAD Dec 16 04:11:29.561000 audit[4477]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb6b637e0 a2=94 a3=7fffb6b639c0 items=0 ppid=4357 pid=4477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.561000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 04:11:29.561000 audit: BPF prog-id=198 op=UNLOAD Dec 16 04:11:29.561000 audit[4477]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffb6b637e0 a2=94 a3=7fffb6b639c0 items=0 ppid=4357 pid=4477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.561000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 04:11:29.675119 systemd-networkd[1555]: vxlan.calico: Link UP Dec 16 04:11:29.675134 systemd-networkd[1555]: vxlan.calico: Gained carrier Dec 16 04:11:29.711000 audit: BPF prog-id=199 op=LOAD Dec 16 04:11:29.711000 audit[4504]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd68b22f10 a2=98 a3=20 items=0 ppid=4357 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.711000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 04:11:29.712000 audit: BPF prog-id=199 op=UNLOAD Dec 16 04:11:29.712000 audit[4504]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd68b22ee0 a3=0 items=0 ppid=4357 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.712000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 04:11:29.713000 audit: BPF prog-id=200 op=LOAD Dec 16 04:11:29.713000 audit[4504]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd68b22d20 a2=94 a3=54428f items=0 ppid=4357 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.713000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 04:11:29.714000 audit: BPF prog-id=200 op=UNLOAD Dec 16 04:11:29.714000 audit[4504]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd68b22d20 
a2=94 a3=54428f items=0 ppid=4357 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.714000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 04:11:29.714000 audit: BPF prog-id=201 op=LOAD Dec 16 04:11:29.714000 audit[4504]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd68b22d50 a2=94 a3=2 items=0 ppid=4357 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.714000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 04:11:29.714000 audit: BPF prog-id=201 op=UNLOAD Dec 16 04:11:29.714000 audit[4504]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd68b22d50 a2=0 a3=2 items=0 ppid=4357 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.714000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 04:11:29.715000 audit: BPF prog-id=202 op=LOAD Dec 16 04:11:29.715000 audit[4504]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd68b22b00 a2=94 a3=4 items=0 ppid=4357 pid=4504 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 04:11:29.715000 audit: BPF prog-id=202 op=UNLOAD Dec 16 04:11:29.715000 audit[4504]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd68b22b00 a2=94 a3=4 items=0 ppid=4357 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 04:11:29.715000 audit: BPF prog-id=203 op=LOAD Dec 16 04:11:29.715000 audit[4504]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd68b22c00 a2=94 a3=7ffd68b22d80 items=0 ppid=4357 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 04:11:29.715000 audit: BPF prog-id=203 op=UNLOAD Dec 16 04:11:29.715000 audit[4504]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd68b22c00 a2=0 a3=7ffd68b22d80 items=0 ppid=4357 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.715000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 04:11:29.718000 audit: BPF prog-id=204 op=LOAD Dec 16 04:11:29.718000 audit[4504]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd68b22330 a2=94 a3=2 items=0 ppid=4357 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.718000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 04:11:29.719000 audit: BPF prog-id=204 op=UNLOAD Dec 16 04:11:29.719000 audit[4504]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd68b22330 a2=0 a3=2 items=0 ppid=4357 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.719000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 04:11:29.719000 audit: BPF prog-id=205 op=LOAD Dec 16 04:11:29.719000 audit[4504]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd68b22430 a2=94 a3=30 items=0 ppid=4357 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.719000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 04:11:29.731000 audit: BPF prog-id=206 op=LOAD Dec 16 04:11:29.731000 audit[4508]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe69fc2af0 a2=98 a3=0 items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.731000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.731000 audit: BPF prog-id=206 op=UNLOAD Dec 16 04:11:29.731000 audit[4508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe69fc2ac0 a3=0 items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.731000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.732000 audit: BPF prog-id=207 op=LOAD Dec 16 04:11:29.732000 audit[4508]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe69fc28e0 a2=94 a3=54428f items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.732000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.732000 audit: BPF prog-id=207 op=UNLOAD Dec 16 04:11:29.732000 audit[4508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe69fc28e0 a2=94 a3=54428f items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.732000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.732000 audit: BPF prog-id=208 op=LOAD Dec 16 04:11:29.732000 audit[4508]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe69fc2910 a2=94 a3=2 items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.732000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.732000 audit: BPF prog-id=208 op=UNLOAD Dec 16 04:11:29.732000 audit[4508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe69fc2910 a2=0 a3=2 items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.732000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.767698 systemd-networkd[1555]: cali8c72564175f: Gained IPv6LL Dec 16 04:11:29.971000 audit: BPF prog-id=209 op=LOAD Dec 16 04:11:29.971000 audit[4508]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe69fc27d0 a2=94 a3=1 items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.971000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.971000 audit: BPF prog-id=209 op=UNLOAD Dec 16 04:11:29.971000 audit[4508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe69fc27d0 a2=94 a3=1 items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.971000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.986000 audit: BPF prog-id=210 op=LOAD Dec 16 04:11:29.986000 audit[4508]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe69fc27c0 a2=94 a3=4 items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.986000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.987000 audit: BPF prog-id=210 op=UNLOAD Dec 16 04:11:29.987000 audit[4508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe69fc27c0 a2=0 a3=4 items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.987000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.988000 audit: BPF prog-id=211 op=LOAD Dec 16 04:11:29.988000 audit[4508]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe69fc2620 a2=94 a3=5 items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.988000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.988000 audit: BPF prog-id=211 op=UNLOAD Dec 16 04:11:29.988000 audit[4508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe69fc2620 a2=0 a3=5 items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.988000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.988000 audit: BPF prog-id=212 op=LOAD Dec 16 04:11:29.988000 audit[4508]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe69fc2840 a2=94 a3=6 items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.988000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.988000 audit: BPF prog-id=212 op=UNLOAD Dec 16 04:11:29.988000 audit[4508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe69fc2840 a2=0 a3=6 items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.988000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.988000 audit: BPF prog-id=213 op=LOAD Dec 16 04:11:29.988000 audit[4508]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe69fc1ff0 a2=94 a3=88 items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.988000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.988000 audit: BPF prog-id=214 op=LOAD Dec 16 04:11:29.988000 audit[4508]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe69fc1e70 a2=94 a3=2 items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.988000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.989000 audit: BPF prog-id=214 op=UNLOAD Dec 16 04:11:29.989000 audit[4508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe69fc1ea0 a2=0 a3=7ffe69fc1fa0 items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.989000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.989000 audit: BPF prog-id=213 op=UNLOAD Dec 16 04:11:29.989000 audit[4508]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=14257d10 a2=0 a3=5ff1166d2d8ffe04 items=0 ppid=4357 pid=4508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.989000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 04:11:29.997000 audit: BPF prog-id=205 op=UNLOAD Dec 16 04:11:29.997000 audit[4357]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0013f2440 a2=0 a3=0 items=0 ppid=4320 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:29.997000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 04:11:30.079000 audit[4543]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=4543 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 04:11:30.079000 audit[4543]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffff07c8890 a2=0 a3=7ffff07c887c items=0 ppid=4357 pid=4543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:30.079000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 04:11:30.089459 kubelet[2982]: E1216 04:11:30.089220 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-756bfb6d7d-p4kdb" podUID="5542aa1b-8a7b-412d-a408-cf0a80b3e3bc" Dec 16 04:11:30.089000 audit[4542]: NETFILTER_CFG table=raw:122 family=2 entries=21 op=nft_register_chain pid=4542 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 04:11:30.089000 audit[4542]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffcf26d9110 a2=0 a3=7ffcf26d90fc items=0 ppid=4357 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:30.089000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 04:11:30.102000 audit[4548]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4548 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 04:11:30.102000 audit[4548]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffce5e6ce80 a2=0 a3=7ffce5e6ce6c items=0 ppid=4357 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:30.102000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 04:11:30.119000 audit[4546]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4546 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Dec 16 04:11:30.119000 audit[4546]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffcc74fbe80 a2=0 a3=7ffcc74fbe6c items=0 ppid=4357 pid=4546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:30.119000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 04:11:30.160000 audit[4556]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4556 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:30.160000 audit[4556]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd221406d0 a2=0 a3=7ffd221406bc items=0 ppid=3120 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:30.160000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:30.166000 audit[4556]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4556 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:30.166000 audit[4556]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd221406d0 a2=0 a3=0 items=0 ppid=3120 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:30.166000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:30.919796 systemd-networkd[1555]: 
vxlan.calico: Gained IPv6LL Dec 16 04:11:35.501630 containerd[1631]: time="2025-12-16T04:11:35.501035219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5887594b4-5fm6l,Uid:2b2fbc29-627a-4636-910d-2ada1caf4c64,Namespace:calico-apiserver,Attempt:0,}" Dec 16 04:11:35.734850 systemd-networkd[1555]: cali27d510db67f: Link UP Dec 16 04:11:35.736561 systemd-networkd[1555]: cali27d510db67f: Gained carrier Dec 16 04:11:35.757517 containerd[1631]: 2025-12-16 04:11:35.591 [INFO][4571] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--5fm6l-eth0 calico-apiserver-5887594b4- calico-apiserver 2b2fbc29-627a-4636-910d-2ada1caf4c64 822 0 2025-12-16 04:10:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5887594b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-cuii1.gb1.brightbox.com calico-apiserver-5887594b4-5fm6l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali27d510db67f [] [] }} ContainerID="028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" Namespace="calico-apiserver" Pod="calico-apiserver-5887594b4-5fm6l" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--5fm6l-" Dec 16 04:11:35.757517 containerd[1631]: 2025-12-16 04:11:35.593 [INFO][4571] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" Namespace="calico-apiserver" Pod="calico-apiserver-5887594b4-5fm6l" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--5fm6l-eth0" Dec 16 04:11:35.757517 containerd[1631]: 2025-12-16 04:11:35.662 [INFO][4583] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" HandleID="k8s-pod-network.028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" Workload="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--5fm6l-eth0" Dec 16 04:11:35.757808 containerd[1631]: 2025-12-16 04:11:35.662 [INFO][4583] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" HandleID="k8s-pod-network.028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" Workload="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--5fm6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cde70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-cuii1.gb1.brightbox.com", "pod":"calico-apiserver-5887594b4-5fm6l", "timestamp":"2025-12-16 04:11:35.66213196 +0000 UTC"}, Hostname:"srv-cuii1.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 04:11:35.757808 containerd[1631]: 2025-12-16 04:11:35.662 [INFO][4583] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 04:11:35.757808 containerd[1631]: 2025-12-16 04:11:35.662 [INFO][4583] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 04:11:35.757808 containerd[1631]: 2025-12-16 04:11:35.662 [INFO][4583] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-cuii1.gb1.brightbox.com' Dec 16 04:11:35.757808 containerd[1631]: 2025-12-16 04:11:35.679 [INFO][4583] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:35.757808 containerd[1631]: 2025-12-16 04:11:35.685 [INFO][4583] ipam/ipam.go 394: Looking up existing affinities for host host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:35.757808 containerd[1631]: 2025-12-16 04:11:35.691 [INFO][4583] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:35.757808 containerd[1631]: 2025-12-16 04:11:35.695 [INFO][4583] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:35.757808 containerd[1631]: 2025-12-16 04:11:35.699 [INFO][4583] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:35.759120 containerd[1631]: 2025-12-16 04:11:35.699 [INFO][4583] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:35.759120 containerd[1631]: 2025-12-16 04:11:35.702 [INFO][4583] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4 Dec 16 04:11:35.759120 containerd[1631]: 2025-12-16 04:11:35.712 [INFO][4583] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:35.759120 containerd[1631]: 2025-12-16 04:11:35.722 
[INFO][4583] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.194/26] block=192.168.103.192/26 handle="k8s-pod-network.028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:35.759120 containerd[1631]: 2025-12-16 04:11:35.722 [INFO][4583] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.194/26] handle="k8s-pod-network.028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:35.759120 containerd[1631]: 2025-12-16 04:11:35.722 [INFO][4583] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 04:11:35.759120 containerd[1631]: 2025-12-16 04:11:35.722 [INFO][4583] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.194/26] IPv6=[] ContainerID="028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" HandleID="k8s-pod-network.028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" Workload="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--5fm6l-eth0" Dec 16 04:11:35.760498 containerd[1631]: 2025-12-16 04:11:35.727 [INFO][4571] cni-plugin/k8s.go 418: Populated endpoint ContainerID="028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" Namespace="calico-apiserver" Pod="calico-apiserver-5887594b4-5fm6l" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--5fm6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--5fm6l-eth0", GenerateName:"calico-apiserver-5887594b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b2fbc29-627a-4636-910d-2ada1caf4c64", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 4, 10, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5887594b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cuii1.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-5887594b4-5fm6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali27d510db67f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 04:11:35.760614 containerd[1631]: 2025-12-16 04:11:35.727 [INFO][4571] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.194/32] ContainerID="028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" Namespace="calico-apiserver" Pod="calico-apiserver-5887594b4-5fm6l" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--5fm6l-eth0" Dec 16 04:11:35.760614 containerd[1631]: 2025-12-16 04:11:35.727 [INFO][4571] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27d510db67f ContainerID="028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" Namespace="calico-apiserver" Pod="calico-apiserver-5887594b4-5fm6l" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--5fm6l-eth0" Dec 16 04:11:35.760614 containerd[1631]: 2025-12-16 04:11:35.735 [INFO][4571] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" Namespace="calico-apiserver" 
Pod="calico-apiserver-5887594b4-5fm6l" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--5fm6l-eth0" Dec 16 04:11:35.760744 containerd[1631]: 2025-12-16 04:11:35.737 [INFO][4571] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" Namespace="calico-apiserver" Pod="calico-apiserver-5887594b4-5fm6l" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--5fm6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--5fm6l-eth0", GenerateName:"calico-apiserver-5887594b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b2fbc29-627a-4636-910d-2ada1caf4c64", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 4, 10, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5887594b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cuii1.gb1.brightbox.com", ContainerID:"028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4", Pod:"calico-apiserver-5887594b4-5fm6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali27d510db67f", MAC:"b6:ee:13:c2:dd:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 04:11:35.760837 containerd[1631]: 2025-12-16 04:11:35.747 [INFO][4571] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" Namespace="calico-apiserver" Pod="calico-apiserver-5887594b4-5fm6l" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--5fm6l-eth0" Dec 16 04:11:35.784000 audit[4600]: NETFILTER_CFG table=filter:127 family=2 entries=50 op=nft_register_chain pid=4600 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 04:11:35.789717 kernel: kauditd_printk_skb: 231 callbacks suppressed Dec 16 04:11:35.789831 kernel: audit: type=1325 audit(1765858295.784:663): table=filter:127 family=2 entries=50 op=nft_register_chain pid=4600 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 04:11:35.784000 audit[4600]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffc84985490 a2=0 a3=7ffc8498547c items=0 ppid=4357 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:35.795052 kernel: audit: type=1300 audit(1765858295.784:663): arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffc84985490 a2=0 a3=7ffc8498547c items=0 ppid=4357 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:35.784000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 04:11:35.803403 kernel: audit: 
type=1327 audit(1765858295.784:663): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 04:11:35.849994 containerd[1631]: time="2025-12-16T04:11:35.849916699Z" level=info msg="connecting to shim 028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4" address="unix:///run/containerd/s/70b22015a58645a3152d3cb7a8a323d329d695b423c36c3c3ee3a85a372d8127" namespace=k8s.io protocol=ttrpc version=3 Dec 16 04:11:35.888808 systemd[1]: Started cri-containerd-028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4.scope - libcontainer container 028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4. Dec 16 04:11:35.913000 audit: BPF prog-id=215 op=LOAD Dec 16 04:11:35.917552 kernel: audit: type=1334 audit(1765858295.913:664): prog-id=215 op=LOAD Dec 16 04:11:35.915000 audit: BPF prog-id=216 op=LOAD Dec 16 04:11:35.919435 kernel: audit: type=1334 audit(1765858295.915:665): prog-id=216 op=LOAD Dec 16 04:11:35.915000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4610 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:35.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032386635656438363736303364343534343139613830333164313563 Dec 16 04:11:35.926579 kernel: audit: type=1300 audit(1765858295.915:665): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4610 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:35.926663 kernel: audit: type=1327 audit(1765858295.915:665): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032386635656438363736303364343534343139613830333164313563 Dec 16 04:11:35.915000 audit: BPF prog-id=216 op=UNLOAD Dec 16 04:11:35.931178 kernel: audit: type=1334 audit(1765858295.915:666): prog-id=216 op=UNLOAD Dec 16 04:11:35.931284 kernel: audit: type=1300 audit(1765858295.915:666): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4610 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:35.915000 audit[4621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4610 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:35.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032386635656438363736303364343534343139613830333164313563 Dec 16 04:11:35.937949 kernel: audit: type=1327 audit(1765858295.915:666): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032386635656438363736303364343534343139613830333164313563 Dec 16 04:11:35.916000 audit: BPF prog-id=217 op=LOAD Dec 16 04:11:35.916000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c000130488 a2=98 a3=0 items=0 ppid=4610 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:35.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032386635656438363736303364343534343139613830333164313563 Dec 16 04:11:35.917000 audit: BPF prog-id=218 op=LOAD Dec 16 04:11:35.917000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4610 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:35.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032386635656438363736303364343534343139613830333164313563 Dec 16 04:11:35.917000 audit: BPF prog-id=218 op=UNLOAD Dec 16 04:11:35.917000 audit[4621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4610 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:35.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032386635656438363736303364343534343139613830333164313563 Dec 16 04:11:35.917000 audit: BPF prog-id=217 op=UNLOAD Dec 16 04:11:35.917000 audit[4621]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4610 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:35.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032386635656438363736303364343534343139613830333164313563 Dec 16 04:11:35.917000 audit: BPF prog-id=219 op=LOAD Dec 16 04:11:35.917000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4610 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:35.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032386635656438363736303364343534343139613830333164313563 Dec 16 04:11:35.997067 containerd[1631]: time="2025-12-16T04:11:35.997005733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5887594b4-5fm6l,Uid:2b2fbc29-627a-4636-910d-2ada1caf4c64,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"028f5ed867603d454419a8031d15c3ad79409a7c237ff562d4b27e3f98fe4db4\"" Dec 16 04:11:35.999564 containerd[1631]: time="2025-12-16T04:11:35.999521941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 04:11:36.336948 containerd[1631]: time="2025-12-16T04:11:36.336848032Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:11:36.338085 containerd[1631]: time="2025-12-16T04:11:36.338033411Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 04:11:36.338176 containerd[1631]: time="2025-12-16T04:11:36.338150006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 04:11:36.338863 kubelet[2982]: E1216 04:11:36.338437 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 04:11:36.338863 kubelet[2982]: E1216 04:11:36.338509 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 04:11:36.338863 kubelet[2982]: E1216 04:11:36.338714 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdgpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5887594b4-5fm6l_calico-apiserver(2b2fbc29-627a-4636-910d-2ada1caf4c64): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 04:11:36.340028 kubelet[2982]: E1216 04:11:36.339984 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l" podUID="2b2fbc29-627a-4636-910d-2ada1caf4c64" Dec 16 04:11:36.493411 containerd[1631]: time="2025-12-16T04:11:36.493241763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6665678475-rs6tq,Uid:698ea2f4-6c38-4f29-af10-d89d447f19d4,Namespace:calico-system,Attempt:0,}" Dec 16 04:11:36.678451 systemd-networkd[1555]: cali018b7510a10: Link UP Dec 16 04:11:36.680310 systemd-networkd[1555]: cali018b7510a10: Gained carrier Dec 16 04:11:36.705153 containerd[1631]: 2025-12-16 04:11:36.561 [INFO][4650] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--cuii1.gb1.brightbox.com-k8s-calico--kube--controllers--6665678475--rs6tq-eth0 calico-kube-controllers-6665678475- calico-system 698ea2f4-6c38-4f29-af10-d89d447f19d4 826 0 2025-12-16 04:10:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6665678475 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-cuii1.gb1.brightbox.com calico-kube-controllers-6665678475-rs6tq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali018b7510a10 [] [] }} 
ContainerID="1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" Namespace="calico-system" Pod="calico-kube-controllers-6665678475-rs6tq" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--kube--controllers--6665678475--rs6tq-" Dec 16 04:11:36.705153 containerd[1631]: 2025-12-16 04:11:36.562 [INFO][4650] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" Namespace="calico-system" Pod="calico-kube-controllers-6665678475-rs6tq" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--kube--controllers--6665678475--rs6tq-eth0" Dec 16 04:11:36.705153 containerd[1631]: 2025-12-16 04:11:36.609 [INFO][4662] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" HandleID="k8s-pod-network.1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" Workload="srv--cuii1.gb1.brightbox.com-k8s-calico--kube--controllers--6665678475--rs6tq-eth0" Dec 16 04:11:36.706035 containerd[1631]: 2025-12-16 04:11:36.610 [INFO][4662] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" HandleID="k8s-pod-network.1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" Workload="srv--cuii1.gb1.brightbox.com-k8s-calico--kube--controllers--6665678475--rs6tq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-cuii1.gb1.brightbox.com", "pod":"calico-kube-controllers-6665678475-rs6tq", "timestamp":"2025-12-16 04:11:36.60995282 +0000 UTC"}, Hostname:"srv-cuii1.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 04:11:36.706035 containerd[1631]: 
2025-12-16 04:11:36.610 [INFO][4662] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 04:11:36.706035 containerd[1631]: 2025-12-16 04:11:36.611 [INFO][4662] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 04:11:36.706035 containerd[1631]: 2025-12-16 04:11:36.611 [INFO][4662] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-cuii1.gb1.brightbox.com' Dec 16 04:11:36.706035 containerd[1631]: 2025-12-16 04:11:36.621 [INFO][4662] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:36.706035 containerd[1631]: 2025-12-16 04:11:36.632 [INFO][4662] ipam/ipam.go 394: Looking up existing affinities for host host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:36.706035 containerd[1631]: 2025-12-16 04:11:36.642 [INFO][4662] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:36.706035 containerd[1631]: 2025-12-16 04:11:36.644 [INFO][4662] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:36.706035 containerd[1631]: 2025-12-16 04:11:36.647 [INFO][4662] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:36.706501 containerd[1631]: 2025-12-16 04:11:36.647 [INFO][4662] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:36.706501 containerd[1631]: 2025-12-16 04:11:36.650 [INFO][4662] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8 Dec 16 04:11:36.706501 containerd[1631]: 2025-12-16 04:11:36.655 [INFO][4662] ipam/ipam.go 1246: Writing block in order to 
claim IPs block=192.168.103.192/26 handle="k8s-pod-network.1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:36.706501 containerd[1631]: 2025-12-16 04:11:36.668 [INFO][4662] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.195/26] block=192.168.103.192/26 handle="k8s-pod-network.1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:36.706501 containerd[1631]: 2025-12-16 04:11:36.668 [INFO][4662] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.195/26] handle="k8s-pod-network.1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:36.706501 containerd[1631]: 2025-12-16 04:11:36.669 [INFO][4662] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 04:11:36.706501 containerd[1631]: 2025-12-16 04:11:36.669 [INFO][4662] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.195/26] IPv6=[] ContainerID="1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" HandleID="k8s-pod-network.1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" Workload="srv--cuii1.gb1.brightbox.com-k8s-calico--kube--controllers--6665678475--rs6tq-eth0" Dec 16 04:11:36.706847 containerd[1631]: 2025-12-16 04:11:36.672 [INFO][4650] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" Namespace="calico-system" Pod="calico-kube-controllers-6665678475-rs6tq" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--kube--controllers--6665678475--rs6tq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cuii1.gb1.brightbox.com-k8s-calico--kube--controllers--6665678475--rs6tq-eth0", GenerateName:"calico-kube-controllers-6665678475-", Namespace:"calico-system", SelfLink:"", 
UID:"698ea2f4-6c38-4f29-af10-d89d447f19d4", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 4, 10, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6665678475", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cuii1.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-6665678475-rs6tq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali018b7510a10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 04:11:36.706957 containerd[1631]: 2025-12-16 04:11:36.672 [INFO][4650] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.195/32] ContainerID="1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" Namespace="calico-system" Pod="calico-kube-controllers-6665678475-rs6tq" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--kube--controllers--6665678475--rs6tq-eth0" Dec 16 04:11:36.706957 containerd[1631]: 2025-12-16 04:11:36.672 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali018b7510a10 ContainerID="1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" Namespace="calico-system" Pod="calico-kube-controllers-6665678475-rs6tq" 
WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--kube--controllers--6665678475--rs6tq-eth0" Dec 16 04:11:36.706957 containerd[1631]: 2025-12-16 04:11:36.680 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" Namespace="calico-system" Pod="calico-kube-controllers-6665678475-rs6tq" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--kube--controllers--6665678475--rs6tq-eth0" Dec 16 04:11:36.707126 containerd[1631]: 2025-12-16 04:11:36.681 [INFO][4650] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" Namespace="calico-system" Pod="calico-kube-controllers-6665678475-rs6tq" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--kube--controllers--6665678475--rs6tq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cuii1.gb1.brightbox.com-k8s-calico--kube--controllers--6665678475--rs6tq-eth0", GenerateName:"calico-kube-controllers-6665678475-", Namespace:"calico-system", SelfLink:"", UID:"698ea2f4-6c38-4f29-af10-d89d447f19d4", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 4, 10, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6665678475", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"srv-cuii1.gb1.brightbox.com", ContainerID:"1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8", Pod:"calico-kube-controllers-6665678475-rs6tq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali018b7510a10", MAC:"ce:6a:08:2d:31:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 04:11:36.707220 containerd[1631]: 2025-12-16 04:11:36.698 [INFO][4650] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" Namespace="calico-system" Pod="calico-kube-controllers-6665678475-rs6tq" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--kube--controllers--6665678475--rs6tq-eth0" Dec 16 04:11:36.741000 audit[4676]: NETFILTER_CFG table=filter:128 family=2 entries=40 op=nft_register_chain pid=4676 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 04:11:36.741000 audit[4676]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffe96737080 a2=0 a3=7ffe9673706c items=0 ppid=4357 pid=4676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:36.741000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 04:11:36.829136 containerd[1631]: time="2025-12-16T04:11:36.829063303Z" level=info msg="connecting to shim 1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8" address="unix:///run/containerd/s/a21b2bcc1a2fac503be8ae0890a2e2d34e2a83c0290e71ab07bd96e2e2485a32" 
namespace=k8s.io protocol=ttrpc version=3 Dec 16 04:11:36.870733 systemd[1]: Started cri-containerd-1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8.scope - libcontainer container 1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8. Dec 16 04:11:36.890000 audit: BPF prog-id=220 op=LOAD Dec 16 04:11:36.891000 audit: BPF prog-id=221 op=LOAD Dec 16 04:11:36.891000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4685 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:36.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166656463366332306637343830363635346362653164666439343263 Dec 16 04:11:36.891000 audit: BPF prog-id=221 op=UNLOAD Dec 16 04:11:36.891000 audit[4698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4685 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:36.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166656463366332306637343830363635346362653164666439343263 Dec 16 04:11:36.891000 audit: BPF prog-id=222 op=LOAD Dec 16 04:11:36.891000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4685 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:36.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166656463366332306637343830363635346362653164666439343263 Dec 16 04:11:36.891000 audit: BPF prog-id=223 op=LOAD Dec 16 04:11:36.891000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4685 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:36.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166656463366332306637343830363635346362653164666439343263 Dec 16 04:11:36.891000 audit: BPF prog-id=223 op=UNLOAD Dec 16 04:11:36.891000 audit[4698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4685 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:36.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166656463366332306637343830363635346362653164666439343263 Dec 16 04:11:36.892000 audit: BPF prog-id=222 op=UNLOAD Dec 16 04:11:36.892000 audit[4698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4685 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:36.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166656463366332306637343830363635346362653164666439343263 Dec 16 04:11:36.892000 audit: BPF prog-id=224 op=LOAD Dec 16 04:11:36.892000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4685 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:36.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166656463366332306637343830363635346362653164666439343263 Dec 16 04:11:36.968365 containerd[1631]: time="2025-12-16T04:11:36.968136700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6665678475-rs6tq,Uid:698ea2f4-6c38-4f29-af10-d89d447f19d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"1fedc6c20f74806654cbe1dfd942c10f9ff3258bb6cda630d5f98147bf4c98c8\"" Dec 16 04:11:36.971914 containerd[1631]: time="2025-12-16T04:11:36.971828540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 04:11:37.143270 kubelet[2982]: E1216 04:11:37.142988 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l" podUID="2b2fbc29-627a-4636-910d-2ada1caf4c64" Dec 16 04:11:37.177000 audit[4726]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4726 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:37.177000 audit[4726]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe0ba1e5a0 a2=0 a3=7ffe0ba1e58c items=0 ppid=3120 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:37.177000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:37.183000 audit[4726]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4726 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:37.183000 audit[4726]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe0ba1e5a0 a2=0 a3=0 items=0 ppid=3120 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:37.183000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:37.191613 systemd-networkd[1555]: cali27d510db67f: Gained IPv6LL Dec 16 04:11:37.291130 containerd[1631]: time="2025-12-16T04:11:37.290908913Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:11:37.308474 containerd[1631]: time="2025-12-16T04:11:37.308361131Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 04:11:37.308851 containerd[1631]: time="2025-12-16T04:11:37.308652876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 04:11:37.309328 kubelet[2982]: E1216 04:11:37.309223 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 04:11:37.309549 kubelet[2982]: E1216 04:11:37.309304 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 04:11:37.310643 kubelet[2982]: E1216 04:11:37.310549 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ch2vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6665678475-rs6tq_calico-system(698ea2f4-6c38-4f29-af10-d89d447f19d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 04:11:37.311914 kubelet[2982]: E1216 04:11:37.311870 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6665678475-rs6tq" podUID="698ea2f4-6c38-4f29-af10-d89d447f19d4" Dec 16 04:11:37.498554 containerd[1631]: time="2025-12-16T04:11:37.498419427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-knj9k,Uid:d53735f5-b411-4ae8-af6c-2e5709e7684a,Namespace:kube-system,Attempt:0,}" Dec 16 04:11:37.728916 systemd-networkd[1555]: cali5d2ebc25b3d: Link 
UP Dec 16 04:11:37.731492 systemd-networkd[1555]: cali5d2ebc25b3d: Gained carrier Dec 16 04:11:37.761468 containerd[1631]: 2025-12-16 04:11:37.580 [INFO][4728] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--knj9k-eth0 coredns-668d6bf9bc- kube-system d53735f5-b411-4ae8-af6c-2e5709e7684a 820 0 2025-12-16 04:10:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-cuii1.gb1.brightbox.com coredns-668d6bf9bc-knj9k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5d2ebc25b3d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" Namespace="kube-system" Pod="coredns-668d6bf9bc-knj9k" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--knj9k-" Dec 16 04:11:37.761468 containerd[1631]: 2025-12-16 04:11:37.581 [INFO][4728] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" Namespace="kube-system" Pod="coredns-668d6bf9bc-knj9k" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--knj9k-eth0" Dec 16 04:11:37.761468 containerd[1631]: 2025-12-16 04:11:37.654 [INFO][4739] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" HandleID="k8s-pod-network.f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" Workload="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--knj9k-eth0" Dec 16 04:11:37.763578 containerd[1631]: 2025-12-16 04:11:37.654 [INFO][4739] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" 
HandleID="k8s-pod-network.f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" Workload="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--knj9k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036dc90), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-cuii1.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-knj9k", "timestamp":"2025-12-16 04:11:37.654599717 +0000 UTC"}, Hostname:"srv-cuii1.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 04:11:37.763578 containerd[1631]: 2025-12-16 04:11:37.655 [INFO][4739] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 04:11:37.763578 containerd[1631]: 2025-12-16 04:11:37.655 [INFO][4739] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 04:11:37.763578 containerd[1631]: 2025-12-16 04:11:37.655 [INFO][4739] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-cuii1.gb1.brightbox.com' Dec 16 04:11:37.763578 containerd[1631]: 2025-12-16 04:11:37.666 [INFO][4739] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:37.763578 containerd[1631]: 2025-12-16 04:11:37.677 [INFO][4739] ipam/ipam.go 394: Looking up existing affinities for host host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:37.763578 containerd[1631]: 2025-12-16 04:11:37.686 [INFO][4739] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:37.763578 containerd[1631]: 2025-12-16 04:11:37.690 [INFO][4739] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:37.763578 containerd[1631]: 2025-12-16 04:11:37.693 [INFO][4739] ipam/ipam.go 235: Affinity is 
confirmed and block has been loaded cidr=192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:37.764486 containerd[1631]: 2025-12-16 04:11:37.693 [INFO][4739] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:37.764486 containerd[1631]: 2025-12-16 04:11:37.696 [INFO][4739] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f Dec 16 04:11:37.764486 containerd[1631]: 2025-12-16 04:11:37.704 [INFO][4739] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:37.764486 containerd[1631]: 2025-12-16 04:11:37.715 [INFO][4739] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.196/26] block=192.168.103.192/26 handle="k8s-pod-network.f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:37.764486 containerd[1631]: 2025-12-16 04:11:37.715 [INFO][4739] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.196/26] handle="k8s-pod-network.f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:37.764486 containerd[1631]: 2025-12-16 04:11:37.715 [INFO][4739] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 04:11:37.764486 containerd[1631]: 2025-12-16 04:11:37.716 [INFO][4739] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.196/26] IPv6=[] ContainerID="f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" HandleID="k8s-pod-network.f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" Workload="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--knj9k-eth0" Dec 16 04:11:37.764769 containerd[1631]: 2025-12-16 04:11:37.720 [INFO][4728] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" Namespace="kube-system" Pod="coredns-668d6bf9bc-knj9k" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--knj9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--knj9k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d53735f5-b411-4ae8-af6c-2e5709e7684a", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 4, 10, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cuii1.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-knj9k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali5d2ebc25b3d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 04:11:37.764769 containerd[1631]: 2025-12-16 04:11:37.721 [INFO][4728] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.196/32] ContainerID="f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" Namespace="kube-system" Pod="coredns-668d6bf9bc-knj9k" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--knj9k-eth0" Dec 16 04:11:37.764769 containerd[1631]: 2025-12-16 04:11:37.721 [INFO][4728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d2ebc25b3d ContainerID="f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" Namespace="kube-system" Pod="coredns-668d6bf9bc-knj9k" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--knj9k-eth0" Dec 16 04:11:37.764769 containerd[1631]: 2025-12-16 04:11:37.732 [INFO][4728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" Namespace="kube-system" Pod="coredns-668d6bf9bc-knj9k" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--knj9k-eth0" Dec 16 04:11:37.764769 containerd[1631]: 2025-12-16 04:11:37.734 [INFO][4728] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" Namespace="kube-system" Pod="coredns-668d6bf9bc-knj9k" 
WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--knj9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--knj9k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d53735f5-b411-4ae8-af6c-2e5709e7684a", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 4, 10, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cuii1.gb1.brightbox.com", ContainerID:"f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f", Pod:"coredns-668d6bf9bc-knj9k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d2ebc25b3d", MAC:"0a:65:f4:01:1d:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 04:11:37.764769 
containerd[1631]: 2025-12-16 04:11:37.757 [INFO][4728] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" Namespace="kube-system" Pod="coredns-668d6bf9bc-knj9k" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--knj9k-eth0" Dec 16 04:11:37.819470 containerd[1631]: time="2025-12-16T04:11:37.818719076Z" level=info msg="connecting to shim f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f" address="unix:///run/containerd/s/4271d82ff3c990ca638b92a00fee41062bb156b3f706a06ba0c12c34a26b1af4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 04:11:37.843000 audit[4774]: NETFILTER_CFG table=filter:131 family=2 entries=50 op=nft_register_chain pid=4774 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 04:11:37.843000 audit[4774]: SYSCALL arch=c000003e syscall=46 success=yes exit=24928 a0=3 a1=7fff4d4a2b80 a2=0 a3=7fff4d4a2b6c items=0 ppid=4357 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:37.843000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 04:11:37.887769 systemd[1]: Started cri-containerd-f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f.scope - libcontainer container f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f. 
Dec 16 04:11:37.923000 audit: BPF prog-id=225 op=LOAD Dec 16 04:11:37.925000 audit: BPF prog-id=226 op=LOAD Dec 16 04:11:37.925000 audit[4776]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4765 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:37.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639363435306635316338396532373738333132333438636337363536 Dec 16 04:11:37.926000 audit: BPF prog-id=226 op=UNLOAD Dec 16 04:11:37.926000 audit[4776]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4765 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:37.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639363435306635316338396532373738333132333438636337363536 Dec 16 04:11:37.926000 audit: BPF prog-id=227 op=LOAD Dec 16 04:11:37.926000 audit[4776]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4765 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:37.926000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639363435306635316338396532373738333132333438636337363536 Dec 16 04:11:37.927000 audit: BPF prog-id=228 op=LOAD Dec 16 04:11:37.927000 audit[4776]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4765 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:37.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639363435306635316338396532373738333132333438636337363536 Dec 16 04:11:37.928000 audit: BPF prog-id=228 op=UNLOAD Dec 16 04:11:37.928000 audit[4776]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4765 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:37.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639363435306635316338396532373738333132333438636337363536 Dec 16 04:11:37.928000 audit: BPF prog-id=227 op=UNLOAD Dec 16 04:11:37.928000 audit[4776]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4765 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
04:11:37.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639363435306635316338396532373738333132333438636337363536 Dec 16 04:11:37.928000 audit: BPF prog-id=229 op=LOAD Dec 16 04:11:37.928000 audit[4776]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4765 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:37.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639363435306635316338396532373738333132333438636337363536 Dec 16 04:11:38.014345 containerd[1631]: time="2025-12-16T04:11:38.014164843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-knj9k,Uid:d53735f5-b411-4ae8-af6c-2e5709e7684a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f\"" Dec 16 04:11:38.046226 containerd[1631]: time="2025-12-16T04:11:38.045998991Z" level=info msg="CreateContainer within sandbox \"f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 04:11:38.074404 containerd[1631]: time="2025-12-16T04:11:38.070800919Z" level=info msg="Container 187a07a0c44a68a66627309dbd3637392486f605988dc52459d9ecf5828b1c7a: CDI devices from CRI Config.CDIDevices: []" Dec 16 04:11:38.075548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3409431589.mount: Deactivated successfully. 
Dec 16 04:11:38.087192 containerd[1631]: time="2025-12-16T04:11:38.087095010Z" level=info msg="CreateContainer within sandbox \"f96450f51c89e2778312348cc7656c4544643039276950181ae588de520a821f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"187a07a0c44a68a66627309dbd3637392486f605988dc52459d9ecf5828b1c7a\"" Dec 16 04:11:38.087724 systemd-networkd[1555]: cali018b7510a10: Gained IPv6LL Dec 16 04:11:38.090340 containerd[1631]: time="2025-12-16T04:11:38.090303949Z" level=info msg="StartContainer for \"187a07a0c44a68a66627309dbd3637392486f605988dc52459d9ecf5828b1c7a\"" Dec 16 04:11:38.093790 containerd[1631]: time="2025-12-16T04:11:38.093735800Z" level=info msg="connecting to shim 187a07a0c44a68a66627309dbd3637392486f605988dc52459d9ecf5828b1c7a" address="unix:///run/containerd/s/4271d82ff3c990ca638b92a00fee41062bb156b3f706a06ba0c12c34a26b1af4" protocol=ttrpc version=3 Dec 16 04:11:38.136697 systemd[1]: Started cri-containerd-187a07a0c44a68a66627309dbd3637392486f605988dc52459d9ecf5828b1c7a.scope - libcontainer container 187a07a0c44a68a66627309dbd3637392486f605988dc52459d9ecf5828b1c7a. 
Dec 16 04:11:38.162822 kubelet[2982]: E1216 04:11:38.162692 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6665678475-rs6tq" podUID="698ea2f4-6c38-4f29-af10-d89d447f19d4" Dec 16 04:11:38.214000 audit: BPF prog-id=230 op=LOAD Dec 16 04:11:38.214000 audit: BPF prog-id=231 op=LOAD Dec 16 04:11:38.214000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4765 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:38.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138376130376130633434613638613636363237333039646264333633 Dec 16 04:11:38.215000 audit: BPF prog-id=231 op=UNLOAD Dec 16 04:11:38.215000 audit[4804]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4765 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:38.215000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138376130376130633434613638613636363237333039646264333633 Dec 16 04:11:38.215000 audit: BPF prog-id=232 op=LOAD Dec 16 04:11:38.215000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4765 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:38.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138376130376130633434613638613636363237333039646264333633 Dec 16 04:11:38.215000 audit: BPF prog-id=233 op=LOAD Dec 16 04:11:38.215000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4765 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:38.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138376130376130633434613638613636363237333039646264333633 Dec 16 04:11:38.215000 audit: BPF prog-id=233 op=UNLOAD Dec 16 04:11:38.215000 audit[4804]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4765 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 04:11:38.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138376130376130633434613638613636363237333039646264333633 Dec 16 04:11:38.215000 audit: BPF prog-id=232 op=UNLOAD Dec 16 04:11:38.215000 audit[4804]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4765 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:38.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138376130376130633434613638613636363237333039646264333633 Dec 16 04:11:38.215000 audit: BPF prog-id=234 op=LOAD Dec 16 04:11:38.215000 audit[4804]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4765 pid=4804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:38.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138376130376130633434613638613636363237333039646264333633 Dec 16 04:11:38.245882 containerd[1631]: time="2025-12-16T04:11:38.245811483Z" level=info msg="StartContainer for \"187a07a0c44a68a66627309dbd3637392486f605988dc52459d9ecf5828b1c7a\" returns successfully" Dec 16 04:11:38.494090 containerd[1631]: time="2025-12-16T04:11:38.494017495Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-cd92j,Uid:d6d2249c-912c-448c-8aa3-089c6b8243d1,Namespace:calico-system,Attempt:0,}" Dec 16 04:11:38.494090 containerd[1631]: time="2025-12-16T04:11:38.494155381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fjqnd,Uid:9f394ecb-5814-4876-9d24-cba0fe4360b7,Namespace:kube-system,Attempt:0,}" Dec 16 04:11:38.494875 containerd[1631]: time="2025-12-16T04:11:38.494782967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5887594b4-svthm,Uid:ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1,Namespace:calico-apiserver,Attempt:0,}" Dec 16 04:11:38.875943 systemd-networkd[1555]: calif2e10f96727: Link UP Dec 16 04:11:38.889789 systemd-networkd[1555]: calif2e10f96727: Gained carrier Dec 16 04:11:38.920752 systemd-networkd[1555]: cali5d2ebc25b3d: Gained IPv6LL Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.662 [INFO][4839] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--cuii1.gb1.brightbox.com-k8s-csi--node--driver--cd92j-eth0 csi-node-driver- calico-system d6d2249c-912c-448c-8aa3-089c6b8243d1 692 0 2025-12-16 04:10:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-cuii1.gb1.brightbox.com csi-node-driver-cd92j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif2e10f96727 [] [] }} ContainerID="68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" Namespace="calico-system" Pod="csi-node-driver-cd92j" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-csi--node--driver--cd92j-" Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.664 [INFO][4839] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" Namespace="calico-system" Pod="csi-node-driver-cd92j" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-csi--node--driver--cd92j-eth0" Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.753 [INFO][4883] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" HandleID="k8s-pod-network.68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" Workload="srv--cuii1.gb1.brightbox.com-k8s-csi--node--driver--cd92j-eth0" Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.753 [INFO][4883] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" HandleID="k8s-pod-network.68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" Workload="srv--cuii1.gb1.brightbox.com-k8s-csi--node--driver--cd92j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8b0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-cuii1.gb1.brightbox.com", "pod":"csi-node-driver-cd92j", "timestamp":"2025-12-16 04:11:38.753249738 +0000 UTC"}, Hostname:"srv-cuii1.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.754 [INFO][4883] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.754 [INFO][4883] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.754 [INFO][4883] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-cuii1.gb1.brightbox.com' Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.768 [INFO][4883] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.779 [INFO][4883] ipam/ipam.go 394: Looking up existing affinities for host host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.790 [INFO][4883] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.794 [INFO][4883] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.801 [INFO][4883] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.801 [INFO][4883] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.804 [INFO][4883] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26 Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.819 [INFO][4883] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.845 
[INFO][4883] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.197/26] block=192.168.103.192/26 handle="k8s-pod-network.68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.845 [INFO][4883] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.197/26] handle="k8s-pod-network.68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.845 [INFO][4883] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 04:11:38.936776 containerd[1631]: 2025-12-16 04:11:38.845 [INFO][4883] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.197/26] IPv6=[] ContainerID="68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" HandleID="k8s-pod-network.68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" Workload="srv--cuii1.gb1.brightbox.com-k8s-csi--node--driver--cd92j-eth0" Dec 16 04:11:38.941158 containerd[1631]: 2025-12-16 04:11:38.857 [INFO][4839] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" Namespace="calico-system" Pod="csi-node-driver-cd92j" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-csi--node--driver--cd92j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cuii1.gb1.brightbox.com-k8s-csi--node--driver--cd92j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d6d2249c-912c-448c-8aa3-089c6b8243d1", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 4, 10, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", 
"controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cuii1.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-cd92j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2e10f96727", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 04:11:38.941158 containerd[1631]: 2025-12-16 04:11:38.858 [INFO][4839] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.197/32] ContainerID="68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" Namespace="calico-system" Pod="csi-node-driver-cd92j" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-csi--node--driver--cd92j-eth0" Dec 16 04:11:38.941158 containerd[1631]: 2025-12-16 04:11:38.858 [INFO][4839] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2e10f96727 ContainerID="68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" Namespace="calico-system" Pod="csi-node-driver-cd92j" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-csi--node--driver--cd92j-eth0" Dec 16 04:11:38.941158 containerd[1631]: 2025-12-16 04:11:38.889 [INFO][4839] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" Namespace="calico-system" Pod="csi-node-driver-cd92j" 
WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-csi--node--driver--cd92j-eth0" Dec 16 04:11:38.941158 containerd[1631]: 2025-12-16 04:11:38.897 [INFO][4839] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" Namespace="calico-system" Pod="csi-node-driver-cd92j" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-csi--node--driver--cd92j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cuii1.gb1.brightbox.com-k8s-csi--node--driver--cd92j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d6d2249c-912c-448c-8aa3-089c6b8243d1", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 4, 10, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cuii1.gb1.brightbox.com", ContainerID:"68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26", Pod:"csi-node-driver-cd92j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2e10f96727", MAC:"1a:df:f2:31:f0:a6", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 04:11:38.941158 containerd[1631]: 2025-12-16 04:11:38.928 [INFO][4839] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" Namespace="calico-system" Pod="csi-node-driver-cd92j" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-csi--node--driver--cd92j-eth0" Dec 16 04:11:39.020182 containerd[1631]: time="2025-12-16T04:11:39.019675776Z" level=info msg="connecting to shim 68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26" address="unix:///run/containerd/s/8f1f1b575c9061be36aa79caa883716603b939371990b5277ce4e736747c87d2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 04:11:39.058732 systemd-networkd[1555]: calic9cc920f910: Link UP Dec 16 04:11:39.060319 systemd-networkd[1555]: calic9cc920f910: Gained carrier Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:38.643 [INFO][4844] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--fjqnd-eth0 coredns-668d6bf9bc- kube-system 9f394ecb-5814-4876-9d24-cba0fe4360b7 825 0 2025-12-16 04:10:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-cuii1.gb1.brightbox.com coredns-668d6bf9bc-fjqnd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic9cc920f910 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjqnd" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--fjqnd-" Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:38.643 [INFO][4844] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjqnd" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--fjqnd-eth0" Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:38.799 [INFO][4878] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" HandleID="k8s-pod-network.6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" Workload="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--fjqnd-eth0" Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:38.800 [INFO][4878] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" HandleID="k8s-pod-network.6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" Workload="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--fjqnd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a1530), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-cuii1.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-fjqnd", "timestamp":"2025-12-16 04:11:38.799681335 +0000 UTC"}, Hostname:"srv-cuii1.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:38.800 [INFO][4878] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:38.847 [INFO][4878] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:38.849 [INFO][4878] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-cuii1.gb1.brightbox.com' Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:38.910 [INFO][4878] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:38.937 [INFO][4878] ipam/ipam.go 394: Looking up existing affinities for host host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:38.960 [INFO][4878] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:38.966 [INFO][4878] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:38.979 [INFO][4878] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:38.979 [INFO][4878] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:38.982 [INFO][4878] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9 Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:39.015 [INFO][4878] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:39.032 
[INFO][4878] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.198/26] block=192.168.103.192/26 handle="k8s-pod-network.6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:39.032 [INFO][4878] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.198/26] handle="k8s-pod-network.6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:39.033 [INFO][4878] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 04:11:39.114893 containerd[1631]: 2025-12-16 04:11:39.033 [INFO][4878] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.198/26] IPv6=[] ContainerID="6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" HandleID="k8s-pod-network.6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" Workload="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--fjqnd-eth0" Dec 16 04:11:39.118157 containerd[1631]: 2025-12-16 04:11:39.047 [INFO][4844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjqnd" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--fjqnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--fjqnd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9f394ecb-5814-4876-9d24-cba0fe4360b7", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 4, 10, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cuii1.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-fjqnd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9cc920f910", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 04:11:39.118157 containerd[1631]: 2025-12-16 04:11:39.047 [INFO][4844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.198/32] ContainerID="6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjqnd" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--fjqnd-eth0" Dec 16 04:11:39.118157 containerd[1631]: 2025-12-16 04:11:39.047 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9cc920f910 ContainerID="6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjqnd" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--fjqnd-eth0" Dec 16 04:11:39.118157 containerd[1631]: 
2025-12-16 04:11:39.058 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjqnd" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--fjqnd-eth0" Dec 16 04:11:39.118157 containerd[1631]: 2025-12-16 04:11:39.061 [INFO][4844] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjqnd" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--fjqnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--fjqnd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9f394ecb-5814-4876-9d24-cba0fe4360b7", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 4, 10, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cuii1.gb1.brightbox.com", ContainerID:"6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9", Pod:"coredns-668d6bf9bc-fjqnd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calic9cc920f910", MAC:"7e:de:d4:25:12:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 04:11:39.118157 containerd[1631]: 2025-12-16 04:11:39.108 [INFO][4844] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-fjqnd" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-coredns--668d6bf9bc--fjqnd-eth0" Dec 16 04:11:39.222031 systemd[1]: Started cri-containerd-68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26.scope - libcontainer container 68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26. 
Dec 16 04:11:39.246000 audit[4951]: NETFILTER_CFG table=filter:132 family=2 entries=48 op=nft_register_chain pid=4951 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 04:11:39.246000 audit[4951]: SYSCALL arch=c000003e syscall=46 success=yes exit=23140 a0=3 a1=7fffb80c4bc0 a2=0 a3=7fffb80c4bac items=0 ppid=4357 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.246000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 04:11:39.260512 systemd-networkd[1555]: calide9716b54cb: Link UP Dec 16 04:11:39.262591 systemd-networkd[1555]: calide9716b54cb: Gained carrier Dec 16 04:11:39.285970 containerd[1631]: time="2025-12-16T04:11:39.285831621Z" level=info msg="connecting to shim 6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9" address="unix:///run/containerd/s/4eaea3ab7b914d3ecd90fc38477e172e76bb3ad76da21baa78c52ccab13a1638" namespace=k8s.io protocol=ttrpc version=3 Dec 16 04:11:39.305499 kubelet[2982]: I1216 04:11:39.301492 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-knj9k" podStartSLOduration=67.295875451 podStartE2EDuration="1m7.295875451s" podCreationTimestamp="2025-12-16 04:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 04:11:39.294074983 +0000 UTC m=+74.005817626" watchObservedRunningTime="2025-12-16 04:11:39.295875451 +0000 UTC m=+74.007618066" Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:38.698 [INFO][4858] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--svthm-eth0 calico-apiserver-5887594b4- calico-apiserver ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1 824 0 2025-12-16 04:10:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5887594b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-cuii1.gb1.brightbox.com calico-apiserver-5887594b4-svthm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calide9716b54cb [] [] }} ContainerID="297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" Namespace="calico-apiserver" Pod="calico-apiserver-5887594b4-svthm" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--svthm-" Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:38.699 [INFO][4858] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" Namespace="calico-apiserver" Pod="calico-apiserver-5887594b4-svthm" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--svthm-eth0" Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:38.824 [INFO][4889] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" HandleID="k8s-pod-network.297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" Workload="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--svthm-eth0" Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:38.825 [INFO][4889] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" HandleID="k8s-pod-network.297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" 
Workload="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--svthm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-cuii1.gb1.brightbox.com", "pod":"calico-apiserver-5887594b4-svthm", "timestamp":"2025-12-16 04:11:38.82437268 +0000 UTC"}, Hostname:"srv-cuii1.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:38.825 [INFO][4889] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:39.033 [INFO][4889] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:39.033 [INFO][4889] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-cuii1.gb1.brightbox.com' Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:39.093 [INFO][4889] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:39.120 [INFO][4889] ipam/ipam.go 394: Looking up existing affinities for host host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:39.130 [INFO][4889] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:39.136 [INFO][4889] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:39.146 [INFO][4889] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 
host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:39.146 [INFO][4889] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:39.152 [INFO][4889] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:39.176 [INFO][4889] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:39.192 [INFO][4889] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.199/26] block=192.168.103.192/26 handle="k8s-pod-network.297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:39.193 [INFO][4889] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.199/26] handle="k8s-pod-network.297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:39.193 [INFO][4889] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 04:11:39.311422 containerd[1631]: 2025-12-16 04:11:39.193 [INFO][4889] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.199/26] IPv6=[] ContainerID="297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" HandleID="k8s-pod-network.297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" Workload="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--svthm-eth0" Dec 16 04:11:39.314306 containerd[1631]: 2025-12-16 04:11:39.242 [INFO][4858] cni-plugin/k8s.go 418: Populated endpoint ContainerID="297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" Namespace="calico-apiserver" Pod="calico-apiserver-5887594b4-svthm" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--svthm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--svthm-eth0", GenerateName:"calico-apiserver-5887594b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 4, 10, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5887594b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cuii1.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-5887594b4-svthm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.103.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide9716b54cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 04:11:39.314306 containerd[1631]: 2025-12-16 04:11:39.244 [INFO][4858] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.199/32] ContainerID="297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" Namespace="calico-apiserver" Pod="calico-apiserver-5887594b4-svthm" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--svthm-eth0" Dec 16 04:11:39.314306 containerd[1631]: 2025-12-16 04:11:39.244 [INFO][4858] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide9716b54cb ContainerID="297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" Namespace="calico-apiserver" Pod="calico-apiserver-5887594b4-svthm" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--svthm-eth0" Dec 16 04:11:39.314306 containerd[1631]: 2025-12-16 04:11:39.264 [INFO][4858] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" Namespace="calico-apiserver" Pod="calico-apiserver-5887594b4-svthm" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--svthm-eth0" Dec 16 04:11:39.314306 containerd[1631]: 2025-12-16 04:11:39.268 [INFO][4858] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" Namespace="calico-apiserver" Pod="calico-apiserver-5887594b4-svthm" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--svthm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--svthm-eth0", GenerateName:"calico-apiserver-5887594b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 4, 10, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5887594b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cuii1.gb1.brightbox.com", ContainerID:"297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d", Pod:"calico-apiserver-5887594b4-svthm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide9716b54cb", MAC:"d2:51:d2:14:3e:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 04:11:39.314306 containerd[1631]: 2025-12-16 04:11:39.289 [INFO][4858] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" Namespace="calico-apiserver" Pod="calico-apiserver-5887594b4-svthm" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-calico--apiserver--5887594b4--svthm-eth0" Dec 16 04:11:39.355000 audit[4982]: NETFILTER_CFG table=filter:133 family=2 entries=20 
op=nft_register_rule pid=4982 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:39.355000 audit[4982]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe51d78140 a2=0 a3=7ffe51d7812c items=0 ppid=3120 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.355000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:39.361000 audit[4982]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=4982 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:39.361000 audit[4982]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe51d78140 a2=0 a3=0 items=0 ppid=3120 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.361000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:39.385000 audit: BPF prog-id=235 op=LOAD Dec 16 04:11:39.387000 audit: BPF prog-id=236 op=LOAD Dec 16 04:11:39.387000 audit[4927]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.387000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638656637383730636537383463323637623835633637383365303837 Dec 16 04:11:39.387000 audit: BPF prog-id=236 op=UNLOAD Dec 16 04:11:39.387000 audit[4927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638656637383730636537383463323637623835633637383365303837 Dec 16 04:11:39.387000 audit: BPF prog-id=237 op=LOAD Dec 16 04:11:39.387000 audit[4927]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638656637383730636537383463323637623835633637383365303837 Dec 16 04:11:39.388000 audit: BPF prog-id=238 op=LOAD Dec 16 04:11:39.388000 audit[4927]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 04:11:39.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638656637383730636537383463323637623835633637383365303837 Dec 16 04:11:39.388000 audit: BPF prog-id=238 op=UNLOAD Dec 16 04:11:39.388000 audit[4927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638656637383730636537383463323637623835633637383365303837 Dec 16 04:11:39.388000 audit: BPF prog-id=237 op=UNLOAD Dec 16 04:11:39.388000 audit[4927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638656637383730636537383463323637623835633637383365303837 Dec 16 04:11:39.388000 audit: BPF prog-id=239 op=LOAD Dec 16 04:11:39.388000 audit[4927]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638656637383730636537383463323637623835633637383365303837 Dec 16 04:11:39.412319 systemd[1]: Started cri-containerd-6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9.scope - libcontainer container 6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9. Dec 16 04:11:39.421473 containerd[1631]: time="2025-12-16T04:11:39.421349836Z" level=info msg="connecting to shim 297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d" address="unix:///run/containerd/s/4f8af8ec1ea6507a8f2fb938e0d42398ff3d4742c7e4aaf04859b67f7a2b1674" namespace=k8s.io protocol=ttrpc version=3 Dec 16 04:11:39.498298 containerd[1631]: time="2025-12-16T04:11:39.496926427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s8rh7,Uid:bad388bf-fcef-4c56-88ec-bd97ca364c03,Namespace:calico-system,Attempt:0,}" Dec 16 04:11:39.505734 systemd[1]: Started cri-containerd-297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d.scope - libcontainer container 297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d. 
Dec 16 04:11:39.514000 audit: BPF prog-id=240 op=LOAD Dec 16 04:11:39.523000 audit: BPF prog-id=241 op=LOAD Dec 16 04:11:39.523000 audit[4981]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4966 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661366662303464316633666433303161626633336530663030396233 Dec 16 04:11:39.523000 audit: BPF prog-id=241 op=UNLOAD Dec 16 04:11:39.523000 audit[4981]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4966 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661366662303464316633666433303161626633336530663030396233 Dec 16 04:11:39.523000 audit: BPF prog-id=242 op=LOAD Dec 16 04:11:39.523000 audit[4981]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4966 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.523000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661366662303464316633666433303161626633336530663030396233 Dec 16 04:11:39.524000 audit: BPF prog-id=243 op=LOAD Dec 16 04:11:39.524000 audit[4981]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4966 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661366662303464316633666433303161626633336530663030396233 Dec 16 04:11:39.524000 audit: BPF prog-id=243 op=UNLOAD Dec 16 04:11:39.524000 audit[4981]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4966 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661366662303464316633666433303161626633336530663030396233 Dec 16 04:11:39.524000 audit: BPF prog-id=242 op=UNLOAD Dec 16 04:11:39.524000 audit[4981]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4966 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
04:11:39.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661366662303464316633666433303161626633336530663030396233 Dec 16 04:11:39.524000 audit: BPF prog-id=244 op=LOAD Dec 16 04:11:39.524000 audit[4981]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4966 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661366662303464316633666433303161626633336530663030396233 Dec 16 04:11:39.528000 audit[5037]: NETFILTER_CFG table=filter:135 family=2 entries=17 op=nft_register_rule pid=5037 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:39.528000 audit[5037]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffec0dd57e0 a2=0 a3=7ffec0dd57cc items=0 ppid=3120 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.528000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:39.541000 audit[5037]: NETFILTER_CFG table=nat:136 family=2 entries=35 op=nft_register_chain pid=5037 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:39.541000 audit[5037]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffec0dd57e0 a2=0 a3=7ffec0dd57cc items=0 ppid=3120 
pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.541000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:39.550000 audit[5039]: NETFILTER_CFG table=filter:137 family=2 entries=48 op=nft_register_chain pid=5039 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 04:11:39.550000 audit[5039]: SYSCALL arch=c000003e syscall=46 success=yes exit=22720 a0=3 a1=7ffecf067e40 a2=0 a3=7ffecf067e2c items=0 ppid=4357 pid=5039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.550000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 04:11:39.590352 containerd[1631]: time="2025-12-16T04:11:39.590296184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cd92j,Uid:d6d2249c-912c-448c-8aa3-089c6b8243d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"68ef7870ce784c267b85c6783e087d7680d2bae6aebf2c5c827d2bc9ca447d26\"" Dec 16 04:11:39.596573 containerd[1631]: time="2025-12-16T04:11:39.596532177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 04:11:39.649408 containerd[1631]: time="2025-12-16T04:11:39.649319319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fjqnd,Uid:9f394ecb-5814-4876-9d24-cba0fe4360b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9\"" Dec 16 04:11:39.658929 containerd[1631]: time="2025-12-16T04:11:39.658286120Z" level=info 
msg="CreateContainer within sandbox \"6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 04:11:39.668000 audit: BPF prog-id=245 op=LOAD Dec 16 04:11:39.672000 audit: BPF prog-id=246 op=LOAD Dec 16 04:11:39.672000 audit[5024]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5006 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373639386139666565353138376436343763333830336134313835 Dec 16 04:11:39.674000 audit: BPF prog-id=246 op=UNLOAD Dec 16 04:11:39.674000 audit[5024]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5006 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373639386139666565353138376436343763333830336134313835 Dec 16 04:11:39.674000 audit: BPF prog-id=247 op=LOAD Dec 16 04:11:39.674000 audit[5024]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5006 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.674000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373639386139666565353138376436343763333830336134313835 Dec 16 04:11:39.677000 audit: BPF prog-id=248 op=LOAD Dec 16 04:11:39.677000 audit[5024]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5006 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373639386139666565353138376436343763333830336134313835 Dec 16 04:11:39.679000 audit: BPF prog-id=248 op=UNLOAD Dec 16 04:11:39.679000 audit[5024]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5006 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373639386139666565353138376436343763333830336134313835 Dec 16 04:11:39.679000 audit: BPF prog-id=247 op=UNLOAD Dec 16 04:11:39.679000 audit[5024]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5006 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
04:11:39.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373639386139666565353138376436343763333830336134313835 Dec 16 04:11:39.680000 audit: BPF prog-id=249 op=LOAD Dec 16 04:11:39.680000 audit[5024]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5006 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373639386139666565353138376436343763333830336134313835 Dec 16 04:11:39.703592 containerd[1631]: time="2025-12-16T04:11:39.703540162Z" level=info msg="Container cc1dd72922a6a0ab82109001307def6a5cbc840455868367da63be94d3ed20c3: CDI devices from CRI Config.CDIDevices: []" Dec 16 04:11:39.714973 containerd[1631]: time="2025-12-16T04:11:39.714908517Z" level=info msg="CreateContainer within sandbox \"6a6fb04d1f3fd301abf33e0f009b38ca4f82f35ebcefd0d78618b0a2bbbb26d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cc1dd72922a6a0ab82109001307def6a5cbc840455868367da63be94d3ed20c3\"" Dec 16 04:11:39.717189 containerd[1631]: time="2025-12-16T04:11:39.717013944Z" level=info msg="StartContainer for \"cc1dd72922a6a0ab82109001307def6a5cbc840455868367da63be94d3ed20c3\"" Dec 16 04:11:39.714000 audit[5075]: NETFILTER_CFG table=filter:138 family=2 entries=57 op=nft_register_chain pid=5075 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 04:11:39.714000 audit[5075]: SYSCALL arch=c000003e syscall=46 success=yes exit=27828 a0=3 
a1=7ffedbc9b5d0 a2=0 a3=7ffedbc9b5bc items=0 ppid=4357 pid=5075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.714000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 04:11:39.723543 containerd[1631]: time="2025-12-16T04:11:39.723430595Z" level=info msg="connecting to shim cc1dd72922a6a0ab82109001307def6a5cbc840455868367da63be94d3ed20c3" address="unix:///run/containerd/s/4eaea3ab7b914d3ecd90fc38477e172e76bb3ad76da21baa78c52ccab13a1638" protocol=ttrpc version=3 Dec 16 04:11:39.760649 systemd[1]: Started cri-containerd-cc1dd72922a6a0ab82109001307def6a5cbc840455868367da63be94d3ed20c3.scope - libcontainer container cc1dd72922a6a0ab82109001307def6a5cbc840455868367da63be94d3ed20c3. Dec 16 04:11:39.809000 audit: BPF prog-id=250 op=LOAD Dec 16 04:11:39.811000 audit: BPF prog-id=251 op=LOAD Dec 16 04:11:39.811000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4966 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363316464373239323261366130616238323130393030313330376465 Dec 16 04:11:39.812000 audit: BPF prog-id=251 op=UNLOAD Dec 16 04:11:39.812000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4966 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363316464373239323261366130616238323130393030313330376465 Dec 16 04:11:39.813000 audit: BPF prog-id=252 op=LOAD Dec 16 04:11:39.813000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4966 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363316464373239323261366130616238323130393030313330376465 Dec 16 04:11:39.814000 audit: BPF prog-id=253 op=LOAD Dec 16 04:11:39.814000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4966 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363316464373239323261366130616238323130393030313330376465 Dec 16 04:11:39.814000 audit: BPF prog-id=253 op=UNLOAD Dec 16 04:11:39.814000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4966 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363316464373239323261366130616238323130393030313330376465 Dec 16 04:11:39.814000 audit: BPF prog-id=252 op=UNLOAD Dec 16 04:11:39.814000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4966 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363316464373239323261366130616238323130393030313330376465 Dec 16 04:11:39.816000 audit: BPF prog-id=254 op=LOAD Dec 16 04:11:39.816000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4966 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363316464373239323261366130616238323130393030313330376465 Dec 16 04:11:39.839410 containerd[1631]: time="2025-12-16T04:11:39.837576312Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5887594b4-svthm,Uid:ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"297698a9fee5187d647c3803a41858edbcfdda204ce846bd88a569233febe24d\"" Dec 16 04:11:39.872402 systemd-networkd[1555]: cali3f0c1da431b: Link UP Dec 16 04:11:39.873434 systemd-networkd[1555]: cali3f0c1da431b: Gained carrier Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.701 [INFO][5045] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--cuii1.gb1.brightbox.com-k8s-goldmane--666569f655--s8rh7-eth0 goldmane-666569f655- calico-system bad388bf-fcef-4c56-88ec-bd97ca364c03 823 0 2025-12-16 04:10:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-cuii1.gb1.brightbox.com goldmane-666569f655-s8rh7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3f0c1da431b [] [] }} ContainerID="d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" Namespace="calico-system" Pod="goldmane-666569f655-s8rh7" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-goldmane--666569f655--s8rh7-" Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.701 [INFO][5045] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" Namespace="calico-system" Pod="goldmane-666569f655-s8rh7" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-goldmane--666569f655--s8rh7-eth0" Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.774 [INFO][5078] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" HandleID="k8s-pod-network.d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" 
Workload="srv--cuii1.gb1.brightbox.com-k8s-goldmane--666569f655--s8rh7-eth0" Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.775 [INFO][5078] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" HandleID="k8s-pod-network.d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" Workload="srv--cuii1.gb1.brightbox.com-k8s-goldmane--666569f655--s8rh7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bd720), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-cuii1.gb1.brightbox.com", "pod":"goldmane-666569f655-s8rh7", "timestamp":"2025-12-16 04:11:39.774790336 +0000 UTC"}, Hostname:"srv-cuii1.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.775 [INFO][5078] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.775 [INFO][5078] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.776 [INFO][5078] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-cuii1.gb1.brightbox.com' Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.789 [INFO][5078] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.806 [INFO][5078] ipam/ipam.go 394: Looking up existing affinities for host host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.816 [INFO][5078] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.819 [INFO][5078] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.824 [INFO][5078] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.824 [INFO][5078] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.829 [INFO][5078] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.839 [INFO][5078] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.858 
[INFO][5078] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.200/26] block=192.168.103.192/26 handle="k8s-pod-network.d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.858 [INFO][5078] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.200/26] handle="k8s-pod-network.d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" host="srv-cuii1.gb1.brightbox.com" Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.858 [INFO][5078] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 04:11:39.902228 containerd[1631]: 2025-12-16 04:11:39.858 [INFO][5078] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.200/26] IPv6=[] ContainerID="d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" HandleID="k8s-pod-network.d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" Workload="srv--cuii1.gb1.brightbox.com-k8s-goldmane--666569f655--s8rh7-eth0" Dec 16 04:11:39.906593 containerd[1631]: 2025-12-16 04:11:39.867 [INFO][5045] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" Namespace="calico-system" Pod="goldmane-666569f655-s8rh7" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-goldmane--666569f655--s8rh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cuii1.gb1.brightbox.com-k8s-goldmane--666569f655--s8rh7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"bad388bf-fcef-4c56-88ec-bd97ca364c03", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 4, 10, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cuii1.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-s8rh7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.103.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3f0c1da431b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 04:11:39.906593 containerd[1631]: 2025-12-16 04:11:39.867 [INFO][5045] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.200/32] ContainerID="d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" Namespace="calico-system" Pod="goldmane-666569f655-s8rh7" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-goldmane--666569f655--s8rh7-eth0" Dec 16 04:11:39.906593 containerd[1631]: 2025-12-16 04:11:39.867 [INFO][5045] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f0c1da431b ContainerID="d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" Namespace="calico-system" Pod="goldmane-666569f655-s8rh7" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-goldmane--666569f655--s8rh7-eth0" Dec 16 04:11:39.906593 containerd[1631]: 2025-12-16 04:11:39.875 [INFO][5045] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" Namespace="calico-system" Pod="goldmane-666569f655-s8rh7" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-goldmane--666569f655--s8rh7-eth0" Dec 16 04:11:39.906593 containerd[1631]: 
2025-12-16 04:11:39.877 [INFO][5045] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" Namespace="calico-system" Pod="goldmane-666569f655-s8rh7" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-goldmane--666569f655--s8rh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cuii1.gb1.brightbox.com-k8s-goldmane--666569f655--s8rh7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"bad388bf-fcef-4c56-88ec-bd97ca364c03", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 4, 10, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cuii1.gb1.brightbox.com", ContainerID:"d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de", Pod:"goldmane-666569f655-s8rh7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.103.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3f0c1da431b", MAC:"6a:59:80:78:61:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 04:11:39.906593 containerd[1631]: 2025-12-16 04:11:39.894 [INFO][5045] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" Namespace="calico-system" Pod="goldmane-666569f655-s8rh7" WorkloadEndpoint="srv--cuii1.gb1.brightbox.com-k8s-goldmane--666569f655--s8rh7-eth0" Dec 16 04:11:39.940000 audit[5127]: NETFILTER_CFG table=filter:139 family=2 entries=68 op=nft_register_chain pid=5127 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 04:11:39.940000 audit[5127]: SYSCALL arch=c000003e syscall=46 success=yes exit=32308 a0=3 a1=7fff5b6c5a70 a2=0 a3=7fff5b6c5a5c items=0 ppid=4357 pid=5127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:39.940000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 04:11:39.962549 containerd[1631]: time="2025-12-16T04:11:39.962497553Z" level=info msg="StartContainer for \"cc1dd72922a6a0ab82109001307def6a5cbc840455868367da63be94d3ed20c3\" returns successfully" Dec 16 04:11:39.998748 containerd[1631]: time="2025-12-16T04:11:39.998664660Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:11:40.051714 containerd[1631]: time="2025-12-16T04:11:40.051490183Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 04:11:40.052255 containerd[1631]: time="2025-12-16T04:11:40.051940977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 04:11:40.053219 kubelet[2982]: E1216 04:11:40.052242 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 04:11:40.053523 kubelet[2982]: E1216 04:11:40.053259 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 04:11:40.053944 kubelet[2982]: E1216 04:11:40.053847 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhg5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinu
xOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cd92j_calico-system(d6d2249c-912c-448c-8aa3-089c6b8243d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 04:11:40.054667 containerd[1631]: time="2025-12-16T04:11:40.054306742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 04:11:40.111907 containerd[1631]: time="2025-12-16T04:11:40.111071637Z" level=info msg="connecting to shim d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de" address="unix:///run/containerd/s/9a5afd3dcaac3f08fa13b20a1d8a33dcebe1ba4b87823752daa440f7a81c5ffc" namespace=k8s.io protocol=ttrpc version=3 Dec 16 04:11:40.157711 systemd[1]: Started cri-containerd-d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de.scope - libcontainer container d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de. 
Dec 16 04:11:40.180000 audit: BPF prog-id=255 op=LOAD Dec 16 04:11:40.181000 audit: BPF prog-id=256 op=LOAD Dec 16 04:11:40.181000 audit[5157]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5146 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:40.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434663632393862383837656264663338616334376363356632356664 Dec 16 04:11:40.182000 audit: BPF prog-id=256 op=UNLOAD Dec 16 04:11:40.182000 audit[5157]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5146 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:40.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434663632393862383837656264663338616334376363356632356664 Dec 16 04:11:40.182000 audit: BPF prog-id=257 op=LOAD Dec 16 04:11:40.182000 audit[5157]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5146 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:40.182000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434663632393862383837656264663338616334376363356632356664 Dec 16 04:11:40.182000 audit: BPF prog-id=258 op=LOAD Dec 16 04:11:40.182000 audit[5157]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5146 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:40.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434663632393862383837656264663338616334376363356632356664 Dec 16 04:11:40.182000 audit: BPF prog-id=258 op=UNLOAD Dec 16 04:11:40.182000 audit[5157]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5146 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:40.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434663632393862383837656264663338616334376363356632356664 Dec 16 04:11:40.182000 audit: BPF prog-id=257 op=UNLOAD Dec 16 04:11:40.182000 audit[5157]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5146 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
04:11:40.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434663632393862383837656264663338616334376363356632356664 Dec 16 04:11:40.182000 audit: BPF prog-id=259 op=LOAD Dec 16 04:11:40.182000 audit[5157]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5146 pid=5157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:40.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434663632393862383837656264663338616334376363356632356664 Dec 16 04:11:40.244505 kubelet[2982]: I1216 04:11:40.244366 2982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fjqnd" podStartSLOduration=68.244344382 podStartE2EDuration="1m8.244344382s" podCreationTimestamp="2025-12-16 04:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 04:11:40.224028662 +0000 UTC m=+74.935771295" watchObservedRunningTime="2025-12-16 04:11:40.244344382 +0000 UTC m=+74.956087004" Dec 16 04:11:40.309468 containerd[1631]: time="2025-12-16T04:11:40.309175465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s8rh7,Uid:bad388bf-fcef-4c56-88ec-bd97ca364c03,Namespace:calico-system,Attempt:0,} returns sandbox id \"d4f6298b887ebdf38ac47cc5f25fddaac7bd9efcd6e93286f7909baff2a5d3de\"" Dec 16 04:11:40.391665 systemd-networkd[1555]: calide9716b54cb: Gained IPv6LL Dec 16 04:11:40.427661 containerd[1631]: 
time="2025-12-16T04:11:40.427596436Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:11:40.454678 containerd[1631]: time="2025-12-16T04:11:40.454549446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 04:11:40.454959 containerd[1631]: time="2025-12-16T04:11:40.454764242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 04:11:40.455046 kubelet[2982]: E1216 04:11:40.454953 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 04:11:40.455046 kubelet[2982]: E1216 04:11:40.455013 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 04:11:40.456462 kubelet[2982]: E1216 04:11:40.455360 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqvvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5887594b4-svthm_calico-apiserver(ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 04:11:40.456686 containerd[1631]: time="2025-12-16T04:11:40.455464476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 04:11:40.457216 kubelet[2982]: E1216 04:11:40.457174 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-svthm" podUID="ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1" Dec 16 04:11:40.618000 audit[5187]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:40.618000 audit[5187]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdc074e670 a2=0 a3=7ffdc074e65c items=0 ppid=3120 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:40.618000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:40.635000 audit[5187]: NETFILTER_CFG table=nat:141 family=2 entries=56 op=nft_register_chain pid=5187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:40.635000 audit[5187]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffdc074e670 a2=0 a3=7ffdc074e65c items=0 ppid=3120 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:40.635000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:40.775659 systemd-networkd[1555]: calif2e10f96727: Gained IPv6LL Dec 16 04:11:40.796950 containerd[1631]: time="2025-12-16T04:11:40.796831187Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:11:40.801522 containerd[1631]: time="2025-12-16T04:11:40.801454007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 04:11:40.801787 containerd[1631]: time="2025-12-16T04:11:40.801567515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 04:11:40.801859 kubelet[2982]: E1216 04:11:40.801793 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 04:11:40.801921 kubelet[2982]: E1216 04:11:40.801858 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 04:11:40.802544 kubelet[2982]: E1216 04:11:40.802461 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhg5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cd92j_calico-system(d6d2249c-912c-448c-8aa3-089c6b8243d1): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 04:11:40.803087 containerd[1631]: time="2025-12-16T04:11:40.803050687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 04:11:40.805037 kubelet[2982]: E1216 04:11:40.804731 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:11:40.839599 systemd-networkd[1555]: calic9cc920f910: Gained IPv6LL Dec 16 04:11:40.967624 systemd-networkd[1555]: cali3f0c1da431b: Gained IPv6LL Dec 16 04:11:41.108997 containerd[1631]: time="2025-12-16T04:11:41.108894722Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:11:41.151734 containerd[1631]: time="2025-12-16T04:11:41.151668598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 04:11:41.213080 containerd[1631]: time="2025-12-16T04:11:41.212905211Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 04:11:41.215013 
kubelet[2982]: E1216 04:11:41.213975 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 04:11:41.215013 kubelet[2982]: E1216 04:11:41.214788 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 04:11:41.215266 kubelet[2982]: E1216 04:11:41.214972 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key
-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmrlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s8rh7_calico-system(bad388bf-fcef-4c56-88ec-bd97ca364c03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 04:11:41.215266 kubelet[2982]: E1216 04:11:41.214516 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:11:41.215266 kubelet[2982]: E1216 04:11:41.214596 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-svthm" podUID="ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1" Dec 16 04:11:41.216620 kubelet[2982]: E1216 04:11:41.216241 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s8rh7" podUID="bad388bf-fcef-4c56-88ec-bd97ca364c03" Dec 16 04:11:41.501866 containerd[1631]: time="2025-12-16T04:11:41.501528178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 04:11:41.674261 kernel: kauditd_printk_skb: 
233 callbacks suppressed Dec 16 04:11:41.674633 kernel: audit: type=1325 audit(1765858301.665:750): table=filter:142 family=2 entries=14 op=nft_register_rule pid=5190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:41.665000 audit[5190]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:41.665000 audit[5190]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeb2e52f50 a2=0 a3=7ffeb2e52f3c items=0 ppid=3120 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:41.665000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:41.683943 kernel: audit: type=1300 audit(1765858301.665:750): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeb2e52f50 a2=0 a3=7ffeb2e52f3c items=0 ppid=3120 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:41.684053 kernel: audit: type=1327 audit(1765858301.665:750): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:41.678000 audit[5190]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:41.686718 kernel: audit: type=1325 audit(1765858301.678:751): table=nat:143 family=2 entries=20 op=nft_register_rule pid=5190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:41.678000 audit[5190]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffeb2e52f50 a2=0 a3=7ffeb2e52f3c 
items=0 ppid=3120 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:41.690694 kernel: audit: type=1300 audit(1765858301.678:751): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffeb2e52f50 a2=0 a3=7ffeb2e52f3c items=0 ppid=3120 pid=5190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:41.678000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:41.695215 kernel: audit: type=1327 audit(1765858301.678:751): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:41.843907 containerd[1631]: time="2025-12-16T04:11:41.843571534Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:11:41.880231 containerd[1631]: time="2025-12-16T04:11:41.880044920Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 04:11:41.880601 containerd[1631]: time="2025-12-16T04:11:41.880091041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 04:11:41.880896 kubelet[2982]: E1216 04:11:41.880813 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 04:11:41.881634 kubelet[2982]: E1216 04:11:41.880913 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 04:11:41.881634 kubelet[2982]: E1216 04:11:41.881100 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:48980414ea8b4415945e777cac55846b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-df4q7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-756bfb6d7d-p4kdb_calico-system(5542aa1b-8a7b-412d-a408-cf0a80b3e3bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 04:11:41.883613 containerd[1631]: time="2025-12-16T04:11:41.883523806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 04:11:42.214144 containerd[1631]: time="2025-12-16T04:11:42.213965559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:11:42.216939 containerd[1631]: time="2025-12-16T04:11:42.216882856Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 04:11:42.217281 containerd[1631]: time="2025-12-16T04:11:42.217078453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 04:11:42.218035 kubelet[2982]: E1216 04:11:42.217714 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 04:11:42.218796 kubelet[2982]: E1216 04:11:42.217904 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 04:11:42.219631 kubelet[2982]: E1216 04:11:42.219156 2982 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-df4q7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-756bfb6d7d-p4kdb_calico-system(5542aa1b-8a7b-412d-a408-cf0a80b3e3bc): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 04:11:42.220961 kubelet[2982]: E1216 04:11:42.220696 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-756bfb6d7d-p4kdb" podUID="5542aa1b-8a7b-412d-a408-cf0a80b3e3bc" Dec 16 04:11:42.224292 kubelet[2982]: E1216 04:11:42.224235 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s8rh7" podUID="bad388bf-fcef-4c56-88ec-bd97ca364c03" Dec 16 04:11:42.707000 audit[5192]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=5192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:42.707000 audit[5192]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd015cf70 a2=0 a3=7ffcd015cf5c items=0 ppid=3120 pid=5192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:42.714186 kernel: audit: type=1325 audit(1765858302.707:752): table=filter:144 family=2 entries=14 op=nft_register_rule pid=5192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:42.714370 kernel: audit: type=1300 audit(1765858302.707:752): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd015cf70 a2=0 a3=7ffcd015cf5c items=0 ppid=3120 pid=5192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:42.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:42.719404 kernel: audit: type=1327 audit(1765858302.707:752): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:42.718000 audit[5192]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:42.721826 kernel: audit: type=1325 audit(1765858302.718:753): table=nat:145 family=2 entries=20 op=nft_register_rule pid=5192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:11:42.718000 audit[5192]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcd015cf70 a2=0 a3=7ffcd015cf5c items=0 ppid=3120 pid=5192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:42.718000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:11:47.495488 containerd[1631]: time="2025-12-16T04:11:47.494802685Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 04:11:47.827814 containerd[1631]: time="2025-12-16T04:11:47.827733389Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:11:47.829041 containerd[1631]: time="2025-12-16T04:11:47.828982445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 04:11:47.829228 containerd[1631]: time="2025-12-16T04:11:47.829006157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 04:11:47.829578 kubelet[2982]: E1216 04:11:47.829463 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 04:11:47.830566 kubelet[2982]: E1216 04:11:47.830238 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 04:11:47.830566 kubelet[2982]: E1216 04:11:47.830479 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdgpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5887594b4-5fm6l_calico-apiserver(2b2fbc29-627a-4636-910d-2ada1caf4c64): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 04:11:47.831771 kubelet[2982]: E1216 04:11:47.831676 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l" podUID="2b2fbc29-627a-4636-910d-2ada1caf4c64" Dec 16 04:11:51.498606 containerd[1631]: time="2025-12-16T04:11:51.498545957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 04:11:51.813394 containerd[1631]: time="2025-12-16T04:11:51.813046932Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:11:51.814868 containerd[1631]: time="2025-12-16T04:11:51.814717362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 04:11:51.814868 containerd[1631]: time="2025-12-16T04:11:51.814819375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 04:11:51.815087 kubelet[2982]: E1216 04:11:51.815016 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 04:11:51.816411 kubelet[2982]: E1216 04:11:51.815092 2982 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 04:11:51.816411 kubelet[2982]: E1216 04:11:51.815275 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ch2vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6665678475-rs6tq_calico-system(698ea2f4-6c38-4f29-af10-d89d447f19d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 04:11:51.816959 kubelet[2982]: E1216 04:11:51.816896 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6665678475-rs6tq" podUID="698ea2f4-6c38-4f29-af10-d89d447f19d4" Dec 16 04:11:53.496971 
containerd[1631]: time="2025-12-16T04:11:53.496232757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 04:11:53.823240 containerd[1631]: time="2025-12-16T04:11:53.822837466Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:11:53.824443 containerd[1631]: time="2025-12-16T04:11:53.824300409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 04:11:53.824443 containerd[1631]: time="2025-12-16T04:11:53.824405600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 04:11:53.824654 kubelet[2982]: E1216 04:11:53.824602 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 04:11:53.825608 kubelet[2982]: E1216 04:11:53.824670 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 04:11:53.825608 kubelet[2982]: E1216 04:11:53.824844 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmrlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s8rh7_calico-system(bad388bf-fcef-4c56-88ec-bd97ca364c03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 04:11:53.826512 kubelet[2982]: E1216 04:11:53.826458 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s8rh7" podUID="bad388bf-fcef-4c56-88ec-bd97ca364c03" Dec 16 04:11:55.494920 containerd[1631]: time="2025-12-16T04:11:55.494435540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 04:11:55.809527 containerd[1631]: time="2025-12-16T04:11:55.809303010Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:11:55.810727 containerd[1631]: time="2025-12-16T04:11:55.810680078Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 04:11:55.810848 containerd[1631]: time="2025-12-16T04:11:55.810793489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 04:11:55.811157 kubelet[2982]: E1216 04:11:55.811090 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 04:11:55.811756 kubelet[2982]: E1216 04:11:55.811165 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 04:11:55.811756 kubelet[2982]: E1216 04:11:55.811335 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhg5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cd92j_calico-system(d6d2249c-912c-448c-8aa3-089c6b8243d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 04:11:55.815125 containerd[1631]: time="2025-12-16T04:11:55.815051235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 04:11:55.933269 systemd[1]: Started sshd@10-10.230.69.46:22-139.178.89.65:59962.service - OpenSSH per-connection server daemon (139.178.89.65:59962). Dec 16 04:11:55.944736 kernel: kauditd_printk_skb: 2 callbacks suppressed Dec 16 04:11:55.945883 kernel: audit: type=1130 audit(1765858315.934:754): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.69.46:22-139.178.89.65:59962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:11:55.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.69.46:22-139.178.89.65:59962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:11:56.132604 containerd[1631]: time="2025-12-16T04:11:56.132479220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:11:56.134246 containerd[1631]: time="2025-12-16T04:11:56.134190462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 04:11:56.134412 containerd[1631]: time="2025-12-16T04:11:56.134231566Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 04:11:56.135310 kubelet[2982]: E1216 04:11:56.135179 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 04:11:56.135310 kubelet[2982]: E1216 04:11:56.135274 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 04:11:56.135964 kubelet[2982]: E1216 04:11:56.135612 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhg5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabil
ities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cd92j_calico-system(d6d2249c-912c-448c-8aa3-089c6b8243d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 04:11:56.137329 kubelet[2982]: E1216 04:11:56.137259 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:11:56.496534 containerd[1631]: time="2025-12-16T04:11:56.496218424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 04:11:56.499153 kubelet[2982]: E1216 04:11:56.499047 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-756bfb6d7d-p4kdb" podUID="5542aa1b-8a7b-412d-a408-cf0a80b3e3bc" Dec 16 04:11:56.805000 audit[5208]: USER_ACCT pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:11:56.806868 sshd[5208]: Accepted publickey for core from 139.178.89.65 port 59962 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 04:11:56.810607 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 04:11:56.807000 audit[5208]: CRED_ACQ pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:11:56.812627 kernel: audit: type=1101 audit(1765858316.805:755): pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:11:56.812729 kernel: audit: type=1103 
audit(1765858316.807:756): pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:11:56.816270 containerd[1631]: time="2025-12-16T04:11:56.815941593Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:11:56.826399 kernel: audit: type=1006 audit(1765858316.807:757): pid=5208 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 04:11:56.827509 containerd[1631]: time="2025-12-16T04:11:56.827428280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 04:11:56.827856 containerd[1631]: time="2025-12-16T04:11:56.827500426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 04:11:56.807000 audit[5208]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4694e190 a2=3 a3=0 items=0 ppid=1 pid=5208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:56.833268 kubelet[2982]: E1216 04:11:56.828762 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 04:11:56.833268 kubelet[2982]: E1216 04:11:56.828997 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 04:11:56.833268 kubelet[2982]: E1216 04:11:56.831042 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqvvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5887594b4-svthm_calico-apiserver(ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 04:11:56.833268 kubelet[2982]: E1216 04:11:56.832437 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-svthm" podUID="ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1" Dec 16 04:11:56.834995 kernel: audit: type=1300 audit(1765858316.807:757): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4694e190 a2=3 a3=0 items=0 ppid=1 pid=5208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:11:56.807000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:11:56.837551 kernel: audit: type=1327 audit(1765858316.807:757): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:11:56.842517 systemd-logind[1613]: New session 13 of user core. Dec 16 04:11:56.851804 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 04:11:56.861000 audit[5208]: USER_START pid=5208 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:11:56.868414 kernel: audit: type=1105 audit(1765858316.861:758): pid=5208 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:11:56.869000 audit[5212]: CRED_ACQ pid=5212 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:11:56.874428 kernel: audit: type=1103 audit(1765858316.869:759): pid=5212 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:11:57.826110 sshd[5212]: Connection closed by 139.178.89.65 port 59962 Dec 16 04:11:57.827663 sshd-session[5208]: pam_unix(sshd:session): session closed for user core Dec 16 
04:11:57.835000 audit[5208]: USER_END pid=5208 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:11:57.850814 kernel: audit: type=1106 audit(1765858317.835:760): pid=5208 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:11:57.835000 audit[5208]: CRED_DISP pid=5208 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:11:57.852904 systemd[1]: sshd@10-10.230.69.46:22-139.178.89.65:59962.service: Deactivated successfully. Dec 16 04:11:57.858785 kernel: audit: type=1104 audit(1765858317.835:761): pid=5208 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:11:57.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.69.46:22-139.178.89.65:59962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:11:57.856224 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 04:11:57.859598 systemd-logind[1613]: Session 13 logged out. Waiting for processes to exit. Dec 16 04:11:57.862433 systemd-logind[1613]: Removed session 13. 
Dec 16 04:12:01.494254 kubelet[2982]: E1216 04:12:01.494113 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l" podUID="2b2fbc29-627a-4636-910d-2ada1caf4c64" Dec 16 04:12:03.009536 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 04:12:03.009912 kernel: audit: type=1130 audit(1765858323.000:763): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.69.46:22-139.178.89.65:36690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:03.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.69.46:22-139.178.89.65:36690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:03.000875 systemd[1]: Started sshd@11-10.230.69.46:22-139.178.89.65:36690.service - OpenSSH per-connection server daemon (139.178.89.65:36690). 
Dec 16 04:12:03.827000 audit[5252]: USER_ACCT pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:03.828577 sshd[5252]: Accepted publickey for core from 139.178.89.65 port 36690 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 04:12:03.832554 sshd-session[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 04:12:03.829000 audit[5252]: CRED_ACQ pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:03.834478 kernel: audit: type=1101 audit(1765858323.827:764): pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:03.834583 kernel: audit: type=1103 audit(1765858323.829:765): pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:03.838677 kernel: audit: type=1006 audit(1765858323.829:766): pid=5252 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 04:12:03.829000 audit[5252]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffff395870 a2=3 a3=0 items=0 ppid=1 pid=5252 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:03.846403 kernel: audit: type=1300 audit(1765858323.829:766): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffff395870 a2=3 a3=0 items=0 ppid=1 pid=5252 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:03.846508 kernel: audit: type=1327 audit(1765858323.829:766): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:03.829000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:03.851657 systemd-logind[1613]: New session 14 of user core. Dec 16 04:12:03.862681 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 04:12:03.867000 audit[5252]: USER_START pid=5252 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:03.870000 audit[5258]: CRED_ACQ pid=5258 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:03.875565 kernel: audit: type=1105 audit(1765858323.867:767): pid=5252 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:03.875670 kernel: audit: type=1103 audit(1765858323.870:768): pid=5258 uid=0 auid=500 
ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:04.401437 sshd[5258]: Connection closed by 139.178.89.65 port 36690 Dec 16 04:12:04.402397 sshd-session[5252]: pam_unix(sshd:session): session closed for user core Dec 16 04:12:04.404000 audit[5252]: USER_END pid=5252 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:04.420787 systemd[1]: sshd@11-10.230.69.46:22-139.178.89.65:36690.service: Deactivated successfully. Dec 16 04:12:04.424836 kernel: audit: type=1106 audit(1765858324.404:769): pid=5252 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:04.424942 kernel: audit: type=1104 audit(1765858324.404:770): pid=5252 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:04.404000 audit[5252]: CRED_DISP pid=5252 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:04.426780 systemd[1]: session-14.scope: Deactivated successfully. 
Dec 16 04:12:04.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.69.46:22-139.178.89.65:36690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:04.429321 systemd-logind[1613]: Session 14 logged out. Waiting for processes to exit. Dec 16 04:12:04.432704 systemd-logind[1613]: Removed session 14. Dec 16 04:12:06.494998 kubelet[2982]: E1216 04:12:06.494884 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s8rh7" podUID="bad388bf-fcef-4c56-88ec-bd97ca364c03" Dec 16 04:12:06.498545 kubelet[2982]: E1216 04:12:06.496584 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6665678475-rs6tq" podUID="698ea2f4-6c38-4f29-af10-d89d447f19d4" Dec 16 04:12:08.494645 kubelet[2982]: E1216 04:12:08.494563 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-svthm" podUID="ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1" Dec 16 04:12:09.495854 kubelet[2982]: E1216 04:12:09.495705 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:12:09.563667 systemd[1]: Started sshd@12-10.230.69.46:22-139.178.89.65:36700.service - OpenSSH per-connection server daemon (139.178.89.65:36700). Dec 16 04:12:09.573060 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 04:12:09.573205 kernel: audit: type=1130 audit(1765858329.563:772): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.69.46:22-139.178.89.65:36700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:09.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.69.46:22-139.178.89.65:36700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 04:12:10.387000 audit[5271]: USER_ACCT pid=5271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:10.390541 sshd[5271]: Accepted publickey for core from 139.178.89.65 port 36700 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 04:12:10.392000 audit[5271]: CRED_ACQ pid=5271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:10.394615 kernel: audit: type=1101 audit(1765858330.387:773): pid=5271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:10.394744 kernel: audit: type=1103 audit(1765858330.392:774): pid=5271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:10.395134 sshd-session[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 04:12:10.392000 audit[5271]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebdbdd570 a2=3 a3=0 items=0 ppid=1 pid=5271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:10.404129 kernel: audit: type=1006 audit(1765858330.392:775): pid=5271 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 04:12:10.404237 kernel: audit: type=1300 audit(1765858330.392:775): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebdbdd570 a2=3 a3=0 items=0 ppid=1 pid=5271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:10.392000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:10.409499 kernel: audit: type=1327 audit(1765858330.392:775): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:10.409994 systemd-logind[1613]: New session 15 of user core. Dec 16 04:12:10.420665 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 04:12:10.425000 audit[5271]: USER_START pid=5271 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:10.432424 kernel: audit: type=1105 audit(1765858330.425:776): pid=5271 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:10.432595 kernel: audit: type=1103 audit(1765858330.429:777): pid=5283 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:10.429000 audit[5283]: 
CRED_ACQ pid=5283 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:11.001311 sshd[5283]: Connection closed by 139.178.89.65 port 36700 Dec 16 04:12:11.002311 sshd-session[5271]: pam_unix(sshd:session): session closed for user core Dec 16 04:12:11.007000 audit[5271]: USER_END pid=5271 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:11.013804 systemd[1]: sshd@12-10.230.69.46:22-139.178.89.65:36700.service: Deactivated successfully. Dec 16 04:12:11.015524 kernel: audit: type=1106 audit(1765858331.007:778): pid=5271 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:11.015654 kernel: audit: type=1104 audit(1765858331.008:779): pid=5271 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:11.008000 audit[5271]: CRED_DISP pid=5271 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:11.018358 systemd[1]: session-15.scope: Deactivated successfully. 
Dec 16 04:12:11.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.69.46:22-139.178.89.65:36700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:11.021845 systemd-logind[1613]: Session 15 logged out. Waiting for processes to exit. Dec 16 04:12:11.025068 systemd-logind[1613]: Removed session 15. Dec 16 04:12:11.496716 containerd[1631]: time="2025-12-16T04:12:11.495993940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 04:12:11.910431 containerd[1631]: time="2025-12-16T04:12:11.910131762Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:12:11.949773 containerd[1631]: time="2025-12-16T04:12:11.949693155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 04:12:11.949773 containerd[1631]: time="2025-12-16T04:12:11.949700766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 04:12:11.951898 kubelet[2982]: E1216 04:12:11.951485 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 04:12:11.951898 kubelet[2982]: E1216 04:12:11.951588 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 04:12:11.951898 
kubelet[2982]: E1216 04:12:11.951820 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:48980414ea8b4415945e777cac55846b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-df4q7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-756bfb6d7d-p4kdb_calico-system(5542aa1b-8a7b-412d-a408-cf0a80b3e3bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 04:12:11.955537 containerd[1631]: time="2025-12-16T04:12:11.955242225Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 04:12:12.276171 containerd[1631]: time="2025-12-16T04:12:12.275882256Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:12:12.346874 containerd[1631]: time="2025-12-16T04:12:12.346734948Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 04:12:12.347170 containerd[1631]: time="2025-12-16T04:12:12.346768994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 04:12:12.347619 kubelet[2982]: E1216 04:12:12.347556 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 04:12:12.347814 kubelet[2982]: E1216 04:12:12.347752 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 04:12:12.348181 kubelet[2982]: E1216 04:12:12.348081 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-df4q7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-756bfb6d7d-p4kdb_calico-system(5542aa1b-8a7b-412d-a408-cf0a80b3e3bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 04:12:12.349565 kubelet[2982]: E1216 04:12:12.349461 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-756bfb6d7d-p4kdb" podUID="5542aa1b-8a7b-412d-a408-cf0a80b3e3bc" Dec 16 04:12:15.503331 containerd[1631]: time="2025-12-16T04:12:15.502307679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 04:12:15.823213 containerd[1631]: time="2025-12-16T04:12:15.822613704Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:12:15.824639 containerd[1631]: time="2025-12-16T04:12:15.824517831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 04:12:15.824639 containerd[1631]: time="2025-12-16T04:12:15.824595742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 04:12:15.824884 kubelet[2982]: E1216 04:12:15.824808 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 04:12:15.825778 kubelet[2982]: E1216 04:12:15.824905 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 04:12:15.825778 kubelet[2982]: E1216 04:12:15.825233 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdgpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5887594b4-5fm6l_calico-apiserver(2b2fbc29-627a-4636-910d-2ada1caf4c64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 04:12:15.827240 kubelet[2982]: E1216 04:12:15.826542 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l" podUID="2b2fbc29-627a-4636-910d-2ada1caf4c64" Dec 16 04:12:16.163496 systemd[1]: Started sshd@13-10.230.69.46:22-139.178.89.65:56038.service - OpenSSH per-connection server daemon (139.178.89.65:56038). 
Dec 16 04:12:16.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.69.46:22-139.178.89.65:56038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:16.175263 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 04:12:16.175347 kernel: audit: type=1130 audit(1765858336.162:781): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.69.46:22-139.178.89.65:56038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:16.174946 systemd[1]: Starting systemd-sysupdate-reboot.service - Reboot Automatically After System Update... Dec 16 04:12:16.199791 systemd-sysupdate[5296]: No transfer definitions found. Dec 16 04:12:16.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysupdate-reboot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 04:12:16.209563 kernel: audit: type=1130 audit(1765858336.203:782): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysupdate-reboot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 04:12:16.202248 systemd[1]: systemd-sysupdate-reboot.service: Main process exited, code=exited, status=1/FAILURE Dec 16 04:12:16.202555 systemd[1]: systemd-sysupdate-reboot.service: Failed with result 'exit-code'. Dec 16 04:12:16.203687 systemd[1]: Failed to start systemd-sysupdate-reboot.service - Reboot Automatically After System Update. 
Dec 16 04:12:16.971000 audit[5295]: USER_ACCT pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:16.979485 sshd[5295]: Accepted publickey for core from 139.178.89.65 port 56038 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 04:12:16.978000 audit[5295]: CRED_ACQ pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:16.983605 sshd-session[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 04:12:16.983955 kernel: audit: type=1101 audit(1765858336.971:783): pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:16.984028 kernel: audit: type=1103 audit(1765858336.978:784): pid=5295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:16.978000 audit[5295]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda29cbe50 a2=3 a3=0 items=0 ppid=1 pid=5295 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:16.992424 kernel: audit: type=1006 audit(1765858336.978:785): pid=5295 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 04:12:16.992524 kernel: audit: type=1300 audit(1765858336.978:785): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda29cbe50 a2=3 a3=0 items=0 ppid=1 pid=5295 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:16.978000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:17.000416 kernel: audit: type=1327 audit(1765858336.978:785): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:17.013511 systemd-logind[1613]: New session 16 of user core. Dec 16 04:12:17.024718 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 04:12:17.030000 audit[5295]: USER_START pid=5295 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:17.038503 kernel: audit: type=1105 audit(1765858337.030:786): pid=5295 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:17.039489 kernel: audit: type=1103 audit(1765858337.037:787): pid=5301 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:17.037000 audit[5301]: CRED_ACQ pid=5301 uid=0 auid=500 ses=16 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:17.501402 containerd[1631]: time="2025-12-16T04:12:17.501159258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 04:12:17.549451 sshd[5301]: Connection closed by 139.178.89.65 port 56038 Dec 16 04:12:17.552978 sshd-session[5295]: pam_unix(sshd:session): session closed for user core Dec 16 04:12:17.555000 audit[5295]: USER_END pid=5295 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:17.563438 kernel: audit: type=1106 audit(1765858337.555:788): pid=5295 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:17.564396 systemd[1]: sshd@13-10.230.69.46:22-139.178.89.65:56038.service: Deactivated successfully. Dec 16 04:12:17.556000 audit[5295]: CRED_DISP pid=5295 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:17.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.69.46:22-139.178.89.65:56038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 04:12:17.569108 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 04:12:17.571406 systemd-logind[1613]: Session 16 logged out. Waiting for processes to exit. Dec 16 04:12:17.575891 systemd-logind[1613]: Removed session 16. Dec 16 04:12:17.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.69.46:22-139.178.89.65:56040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:17.711854 systemd[1]: Started sshd@14-10.230.69.46:22-139.178.89.65:56040.service - OpenSSH per-connection server daemon (139.178.89.65:56040). Dec 16 04:12:17.840293 containerd[1631]: time="2025-12-16T04:12:17.840138465Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:12:17.841787 containerd[1631]: time="2025-12-16T04:12:17.841735168Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 04:12:17.841882 containerd[1631]: time="2025-12-16T04:12:17.841847765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 04:12:17.842191 kubelet[2982]: E1216 04:12:17.842067 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 04:12:17.842959 kubelet[2982]: E1216 04:12:17.842285 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 04:12:17.844403 kubelet[2982]: E1216 04:12:17.843308 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ch2vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6665678475-rs6tq_calico-system(698ea2f4-6c38-4f29-af10-d89d447f19d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 04:12:17.844581 containerd[1631]: time="2025-12-16T04:12:17.843892106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 04:12:17.845164 kubelet[2982]: E1216 04:12:17.845111 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6665678475-rs6tq" podUID="698ea2f4-6c38-4f29-af10-d89d447f19d4" Dec 16 04:12:18.151911 containerd[1631]: time="2025-12-16T04:12:18.151280895Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
04:12:18.152489 containerd[1631]: time="2025-12-16T04:12:18.152446656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 04:12:18.152660 containerd[1631]: time="2025-12-16T04:12:18.152558562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 04:12:18.152997 kubelet[2982]: E1216 04:12:18.152928 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 04:12:18.153087 kubelet[2982]: E1216 04:12:18.152999 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 04:12:18.153352 kubelet[2982]: E1216 04:12:18.153248 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmrlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s8rh7_calico-system(bad388bf-fcef-4c56-88ec-bd97ca364c03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 04:12:18.154925 kubelet[2982]: E1216 04:12:18.154876 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s8rh7" podUID="bad388bf-fcef-4c56-88ec-bd97ca364c03" Dec 16 04:12:18.533000 audit[5315]: USER_ACCT pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:18.535722 sshd[5315]: Accepted publickey for 
core from 139.178.89.65 port 56040 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 04:12:18.538166 sshd-session[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 04:12:18.535000 audit[5315]: CRED_ACQ pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:18.535000 audit[5315]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb006c8d0 a2=3 a3=0 items=0 ppid=1 pid=5315 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:18.535000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:18.551051 systemd-logind[1613]: New session 17 of user core. Dec 16 04:12:18.561704 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 04:12:18.566000 audit[5315]: USER_START pid=5315 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:18.569000 audit[5319]: CRED_ACQ pid=5319 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:19.221121 sshd[5319]: Connection closed by 139.178.89.65 port 56040 Dec 16 04:12:19.222338 sshd-session[5315]: pam_unix(sshd:session): session closed for user core Dec 16 04:12:19.227000 audit[5315]: USER_END pid=5315 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:19.229000 audit[5315]: CRED_DISP pid=5315 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:19.236992 systemd-logind[1613]: Session 17 logged out. Waiting for processes to exit. Dec 16 04:12:19.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.69.46:22-139.178.89.65:56040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:19.237150 systemd[1]: sshd@14-10.230.69.46:22-139.178.89.65:56040.service: Deactivated successfully. 
Dec 16 04:12:19.240081 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 04:12:19.247515 systemd-logind[1613]: Removed session 17. Dec 16 04:12:19.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.69.46:22-139.178.89.65:56056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:19.381785 systemd[1]: Started sshd@15-10.230.69.46:22-139.178.89.65:56056.service - OpenSSH per-connection server daemon (139.178.89.65:56056). Dec 16 04:12:20.190000 audit[5328]: USER_ACCT pid=5328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:20.192079 sshd[5328]: Accepted publickey for core from 139.178.89.65 port 56056 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 04:12:20.192000 audit[5328]: CRED_ACQ pid=5328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:20.193000 audit[5328]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2f78cc30 a2=3 a3=0 items=0 ppid=1 pid=5328 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:20.193000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:20.195633 sshd-session[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 04:12:20.209041 systemd-logind[1613]: New session 18 of user core. 
Dec 16 04:12:20.212638 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 04:12:20.221000 audit[5328]: USER_START pid=5328 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:20.225000 audit[5334]: CRED_ACQ pid=5334 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:20.749094 sshd[5334]: Connection closed by 139.178.89.65 port 56056 Dec 16 04:12:20.749997 sshd-session[5328]: pam_unix(sshd:session): session closed for user core Dec 16 04:12:20.750000 audit[5328]: USER_END pid=5328 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:20.751000 audit[5328]: CRED_DISP pid=5328 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:20.755629 systemd-logind[1613]: Session 18 logged out. Waiting for processes to exit. Dec 16 04:12:20.756090 systemd[1]: sshd@15-10.230.69.46:22-139.178.89.65:56056.service: Deactivated successfully. 
Dec 16 04:12:20.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.69.46:22-139.178.89.65:56056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:20.759173 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 04:12:20.762878 systemd-logind[1613]: Removed session 18. Dec 16 04:12:21.495522 containerd[1631]: time="2025-12-16T04:12:21.495284677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 04:12:21.846740 containerd[1631]: time="2025-12-16T04:12:21.846661172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:12:21.848069 containerd[1631]: time="2025-12-16T04:12:21.848012262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 04:12:21.848421 containerd[1631]: time="2025-12-16T04:12:21.848123114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 04:12:21.848517 kubelet[2982]: E1216 04:12:21.848307 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 04:12:21.849243 kubelet[2982]: E1216 04:12:21.848370 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 04:12:21.849243 
kubelet[2982]: E1216 04:12:21.849153 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqvvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5887594b4-svthm_calico-apiserver(ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 04:12:21.850819 kubelet[2982]: E1216 04:12:21.850759 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-svthm" podUID="ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1" Dec 16 04:12:22.494815 kubelet[2982]: E1216 04:12:22.494669 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-756bfb6d7d-p4kdb" podUID="5542aa1b-8a7b-412d-a408-cf0a80b3e3bc" Dec 16 04:12:24.495521 containerd[1631]: time="2025-12-16T04:12:24.494866137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 04:12:24.841594 containerd[1631]: time="2025-12-16T04:12:24.841450557Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:12:24.874051 containerd[1631]: time="2025-12-16T04:12:24.873909144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 04:12:24.874298 containerd[1631]: time="2025-12-16T04:12:24.873948976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 04:12:24.874523 kubelet[2982]: E1216 04:12:24.874456 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 04:12:24.875495 kubelet[2982]: E1216 04:12:24.874554 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 04:12:24.875495 kubelet[2982]: E1216 04:12:24.874873 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhg5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[
]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cd92j_calico-system(d6d2249c-912c-448c-8aa3-089c6b8243d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 04:12:24.878758 containerd[1631]: time="2025-12-16T04:12:24.878662668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 04:12:25.227763 containerd[1631]: time="2025-12-16T04:12:25.227577140Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:12:25.229230 containerd[1631]: time="2025-12-16T04:12:25.229075520Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 04:12:25.229230 containerd[1631]: time="2025-12-16T04:12:25.229101923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 04:12:25.229491 kubelet[2982]: E1216 04:12:25.229431 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 04:12:25.229598 kubelet[2982]: E1216 04:12:25.229508 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 04:12:25.229807 kubelet[2982]: E1216 04:12:25.229705 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhg5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-cd92j_calico-system(d6d2249c-912c-448c-8aa3-089c6b8243d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 04:12:25.231295 kubelet[2982]: E1216 04:12:25.231249 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:12:25.928447 kernel: kauditd_printk_skb: 24 callbacks suppressed Dec 16 04:12:25.928702 kernel: audit: type=1130 audit(1765858345.914:809): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.69.46:22-139.178.89.65:54456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:25.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.69.46:22-139.178.89.65:54456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 04:12:25.915829 systemd[1]: Started sshd@16-10.230.69.46:22-139.178.89.65:54456.service - OpenSSH per-connection server daemon (139.178.89.65:54456). Dec 16 04:12:26.735000 audit[5355]: USER_ACCT pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:26.741784 sshd[5355]: Accepted publickey for core from 139.178.89.65 port 54456 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 04:12:26.745843 sshd-session[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 04:12:26.746450 kernel: audit: type=1101 audit(1765858346.735:810): pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:26.742000 audit[5355]: CRED_ACQ pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:26.752409 kernel: audit: type=1103 audit(1765858346.742:811): pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:26.752537 kernel: audit: type=1006 audit(1765858346.742:812): pid=5355 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 04:12:26.742000 audit[5355]: SYSCALL arch=c000003e syscall=1 success=yes 
exit=3 a0=8 a1=7ffcef8946a0 a2=3 a3=0 items=0 ppid=1 pid=5355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:26.742000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:26.762549 kernel: audit: type=1300 audit(1765858346.742:812): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcef8946a0 a2=3 a3=0 items=0 ppid=1 pid=5355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:26.762719 kernel: audit: type=1327 audit(1765858346.742:812): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:26.769278 systemd-logind[1613]: New session 19 of user core. Dec 16 04:12:26.778700 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 04:12:26.782000 audit[5355]: USER_START pid=5355 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:26.791315 kernel: audit: type=1105 audit(1765858346.782:813): pid=5355 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:26.791423 kernel: audit: type=1103 audit(1765858346.789:814): pid=5359 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:26.789000 audit[5359]: CRED_ACQ pid=5359 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:27.299812 sshd[5359]: Connection closed by 139.178.89.65 port 54456 Dec 16 04:12:27.302591 sshd-session[5355]: pam_unix(sshd:session): session closed for user core Dec 16 04:12:27.303000 audit[5355]: USER_END pid=5355 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:27.304000 audit[5355]: CRED_DISP pid=5355 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:27.312674 kernel: audit: type=1106 audit(1765858347.303:815): pid=5355 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:27.312785 kernel: audit: type=1104 audit(1765858347.304:816): pid=5355 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:27.312637 systemd[1]: 
sshd@16-10.230.69.46:22-139.178.89.65:54456.service: Deactivated successfully. Dec 16 04:12:27.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.69.46:22-139.178.89.65:54456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:27.318474 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 04:12:27.322634 systemd-logind[1613]: Session 19 logged out. Waiting for processes to exit. Dec 16 04:12:27.325082 systemd-logind[1613]: Removed session 19. Dec 16 04:12:29.494770 kubelet[2982]: E1216 04:12:29.494605 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l" podUID="2b2fbc29-627a-4636-910d-2ada1caf4c64" Dec 16 04:12:30.493816 kubelet[2982]: E1216 04:12:30.493664 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s8rh7" podUID="bad388bf-fcef-4c56-88ec-bd97ca364c03" Dec 16 04:12:31.494987 kubelet[2982]: E1216 04:12:31.494755 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6665678475-rs6tq" podUID="698ea2f4-6c38-4f29-af10-d89d447f19d4" Dec 16 04:12:32.462369 systemd[1]: Started sshd@17-10.230.69.46:22-139.178.89.65:47618.service - OpenSSH per-connection server daemon (139.178.89.65:47618). Dec 16 04:12:32.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.69.46:22-139.178.89.65:47618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:32.468656 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 04:12:32.468771 kernel: audit: type=1130 audit(1765858352.461:818): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.69.46:22-139.178.89.65:47618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 04:12:33.307000 audit[5396]: USER_ACCT pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:33.311645 sshd[5396]: Accepted publickey for core from 139.178.89.65 port 47618 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 04:12:33.311000 audit[5396]: CRED_ACQ pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:33.314693 sshd-session[5396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 04:12:33.315821 kernel: audit: type=1101 audit(1765858353.307:819): pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:33.315932 kernel: audit: type=1103 audit(1765858353.311:820): pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:33.319974 kernel: audit: type=1006 audit(1765858353.311:821): pid=5396 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 16 04:12:33.311000 audit[5396]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb9cdc0e0 a2=3 a3=0 items=0 ppid=1 pid=5396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:33.323751 kernel: audit: type=1300 audit(1765858353.311:821): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb9cdc0e0 a2=3 a3=0 items=0 ppid=1 pid=5396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:33.311000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:33.328002 kernel: audit: type=1327 audit(1765858353.311:821): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:33.332789 systemd-logind[1613]: New session 20 of user core. Dec 16 04:12:33.342686 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 04:12:33.346000 audit[5396]: USER_START pid=5396 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:33.349000 audit[5402]: CRED_ACQ pid=5402 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:33.355577 kernel: audit: type=1105 audit(1765858353.346:822): pid=5396 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:33.355657 kernel: audit: type=1103 audit(1765858353.349:823): 
pid=5402 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:33.500496 kubelet[2982]: E1216 04:12:33.500310 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-756bfb6d7d-p4kdb" podUID="5542aa1b-8a7b-412d-a408-cf0a80b3e3bc" Dec 16 04:12:33.865318 sshd[5402]: Connection closed by 139.178.89.65 port 47618 Dec 16 04:12:33.866958 sshd-session[5396]: pam_unix(sshd:session): session closed for user core Dec 16 04:12:33.876026 kernel: audit: type=1106 audit(1765858353.867:824): pid=5396 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:33.867000 audit[5396]: USER_END pid=5396 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:33.872270 systemd[1]: sshd@17-10.230.69.46:22-139.178.89.65:47618.service: Deactivated successfully. Dec 16 04:12:33.875320 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 04:12:33.868000 audit[5396]: CRED_DISP pid=5396 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:33.882187 systemd-logind[1613]: Session 20 logged out. Waiting for processes to exit. Dec 16 04:12:33.882401 kernel: audit: type=1104 audit(1765858353.868:825): pid=5396 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:33.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.69.46:22-139.178.89.65:47618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:33.885113 systemd-logind[1613]: Removed session 20. 
Dec 16 04:12:35.496714 kubelet[2982]: E1216 04:12:35.496508 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-svthm" podUID="ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1" Dec 16 04:12:37.495859 kubelet[2982]: E1216 04:12:37.495786 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:12:39.028038 systemd[1]: Started sshd@18-10.230.69.46:22-139.178.89.65:47632.service - OpenSSH per-connection server daemon (139.178.89.65:47632). 
Dec 16 04:12:39.035353 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 04:12:39.035453 kernel: audit: type=1130 audit(1765858359.026:827): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.69.46:22-139.178.89.65:47632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:39.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.69.46:22-139.178.89.65:47632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:39.829000 audit[5414]: USER_ACCT pid=5414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:39.831870 sshd[5414]: Accepted publickey for core from 139.178.89.65 port 47632 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 04:12:39.835218 sshd-session[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 04:12:39.832000 audit[5414]: CRED_ACQ pid=5414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:39.837963 kernel: audit: type=1101 audit(1765858359.829:828): pid=5414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:39.838066 kernel: audit: type=1103 audit(1765858359.832:829): pid=5414 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:39.842425 kernel: audit: type=1006 audit(1765858359.832:830): pid=5414 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 16 04:12:39.832000 audit[5414]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc77e8bd0 a2=3 a3=0 items=0 ppid=1 pid=5414 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:39.846046 kernel: audit: type=1300 audit(1765858359.832:830): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc77e8bd0 a2=3 a3=0 items=0 ppid=1 pid=5414 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:39.848974 systemd-logind[1613]: New session 21 of user core. Dec 16 04:12:39.851753 kernel: audit: type=1327 audit(1765858359.832:830): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:39.832000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:39.858661 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 04:12:39.861000 audit[5414]: USER_START pid=5414 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:39.869412 kernel: audit: type=1105 audit(1765858359.861:831): pid=5414 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:39.868000 audit[5422]: CRED_ACQ pid=5422 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:39.875463 kernel: audit: type=1103 audit(1765858359.868:832): pid=5422 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:40.379208 sshd[5422]: Connection closed by 139.178.89.65 port 47632 Dec 16 04:12:40.380211 sshd-session[5414]: pam_unix(sshd:session): session closed for user core Dec 16 04:12:40.381000 audit[5414]: USER_END pid=5414 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:40.392709 kernel: audit: type=1106 
audit(1765858360.381:833): pid=5414 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:40.392853 kernel: audit: type=1104 audit(1765858360.385:834): pid=5414 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:40.385000 audit[5414]: CRED_DISP pid=5414 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:40.393883 systemd[1]: sshd@18-10.230.69.46:22-139.178.89.65:47632.service: Deactivated successfully. Dec 16 04:12:40.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.69.46:22-139.178.89.65:47632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:40.400605 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 04:12:40.402602 systemd-logind[1613]: Session 21 logged out. Waiting for processes to exit. Dec 16 04:12:40.406908 systemd-logind[1613]: Removed session 21. Dec 16 04:12:40.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.69.46:22-139.178.89.65:58490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:40.547565 systemd[1]: Started sshd@19-10.230.69.46:22-139.178.89.65:58490.service - OpenSSH per-connection server daemon (139.178.89.65:58490). 
Dec 16 04:12:41.346000 audit[5433]: USER_ACCT pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:41.349500 sshd[5433]: Accepted publickey for core from 139.178.89.65 port 58490 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 04:12:41.350000 audit[5433]: CRED_ACQ pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:41.350000 audit[5433]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc29d77600 a2=3 a3=0 items=0 ppid=1 pid=5433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:41.350000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:41.353326 sshd-session[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 04:12:41.361339 systemd-logind[1613]: New session 22 of user core. Dec 16 04:12:41.372653 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 04:12:41.376000 audit[5433]: USER_START pid=5433 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:41.380000 audit[5437]: CRED_ACQ pid=5437 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:41.494132 kubelet[2982]: E1216 04:12:41.493948 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l" podUID="2b2fbc29-627a-4636-910d-2ada1caf4c64" Dec 16 04:12:42.421437 sshd[5437]: Connection closed by 139.178.89.65 port 58490 Dec 16 04:12:42.423316 sshd-session[5433]: pam_unix(sshd:session): session closed for user core Dec 16 04:12:42.434000 audit[5433]: USER_END pid=5433 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:42.434000 audit[5433]: CRED_DISP pid=5433 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:42.439427 systemd[1]: sshd@19-10.230.69.46:22-139.178.89.65:58490.service: Deactivated successfully. Dec 16 04:12:42.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.69.46:22-139.178.89.65:58490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:42.443367 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 04:12:42.445608 systemd-logind[1613]: Session 22 logged out. Waiting for processes to exit. Dec 16 04:12:42.447976 systemd-logind[1613]: Removed session 22. Dec 16 04:12:42.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.69.46:22-139.178.89.65:58502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:42.583012 systemd[1]: Started sshd@20-10.230.69.46:22-139.178.89.65:58502.service - OpenSSH per-connection server daemon (139.178.89.65:58502). 
Dec 16 04:12:43.417000 audit[5447]: USER_ACCT pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:43.419221 sshd[5447]: Accepted publickey for core from 139.178.89.65 port 58502 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 04:12:43.418000 audit[5447]: CRED_ACQ pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:43.418000 audit[5447]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc1d146d0 a2=3 a3=0 items=0 ppid=1 pid=5447 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:43.418000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:43.422459 sshd-session[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 04:12:43.430644 systemd-logind[1613]: New session 23 of user core. Dec 16 04:12:43.439609 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 04:12:43.444000 audit[5447]: USER_START pid=5447 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:43.447000 audit[5451]: CRED_ACQ pid=5451 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:44.495159 kubelet[2982]: E1216 04:12:44.494959 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6665678475-rs6tq" podUID="698ea2f4-6c38-4f29-af10-d89d447f19d4" Dec 16 04:12:44.580000 audit[5461]: NETFILTER_CFG table=filter:146 family=2 entries=14 op=nft_register_rule pid=5461 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:12:44.587068 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 16 04:12:44.587257 kernel: audit: type=1325 audit(1765858364.580:851): table=filter:146 family=2 entries=14 op=nft_register_rule pid=5461 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:12:44.580000 audit[5461]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeed52c960 a2=0 a3=7ffeed52c94c items=0 ppid=3120 pid=5461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:44.599497 kernel: audit: type=1300 audit(1765858364.580:851): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeed52c960 a2=0 a3=7ffeed52c94c items=0 ppid=3120 pid=5461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:44.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:12:44.607402 kernel: audit: type=1327 audit(1765858364.580:851): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:12:44.608000 audit[5461]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5461 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:12:44.613417 kernel: audit: type=1325 audit(1765858364.608:852): table=nat:147 family=2 entries=20 op=nft_register_rule pid=5461 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:12:44.608000 audit[5461]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffeed52c960 a2=0 a3=7ffeed52c94c items=0 ppid=3120 pid=5461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:44.608000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:12:44.620644 kernel: audit: type=1300 audit(1765858364.608:852): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffeed52c960 a2=0 a3=7ffeed52c94c items=0 ppid=3120 pid=5461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:44.620731 kernel: audit: type=1327 audit(1765858364.608:852): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:12:44.636000 audit[5463]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:12:44.641440 kernel: audit: type=1325 audit(1765858364.636:853): table=filter:148 family=2 entries=26 op=nft_register_rule pid=5463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:12:44.636000 audit[5463]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc68267090 a2=0 a3=7ffc6826707c items=0 ppid=3120 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:44.636000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:12:44.650125 kernel: audit: type=1300 audit(1765858364.636:853): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc68267090 a2=0 a3=7ffc6826707c items=0 ppid=3120 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:44.650202 kernel: audit: type=1327 audit(1765858364.636:853): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:12:44.641000 audit[5463]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:12:44.652868 kernel: audit: 
type=1325 audit(1765858364.641:854): table=nat:149 family=2 entries=20 op=nft_register_rule pid=5463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:12:44.641000 audit[5463]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc68267090 a2=0 a3=0 items=0 ppid=3120 pid=5463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:44.641000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:12:44.684438 sshd[5451]: Connection closed by 139.178.89.65 port 58502 Dec 16 04:12:44.686513 sshd-session[5447]: pam_unix(sshd:session): session closed for user core Dec 16 04:12:44.690000 audit[5447]: USER_END pid=5447 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:44.690000 audit[5447]: CRED_DISP pid=5447 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:44.696809 systemd[1]: sshd@20-10.230.69.46:22-139.178.89.65:58502.service: Deactivated successfully. Dec 16 04:12:44.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.69.46:22-139.178.89.65:58502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:44.700027 systemd[1]: session-23.scope: Deactivated successfully. 
Dec 16 04:12:44.702890 systemd-logind[1613]: Session 23 logged out. Waiting for processes to exit. Dec 16 04:12:44.705538 systemd-logind[1613]: Removed session 23. Dec 16 04:12:44.839571 systemd[1]: Started sshd@21-10.230.69.46:22-139.178.89.65:58508.service - OpenSSH per-connection server daemon (139.178.89.65:58508). Dec 16 04:12:44.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.69.46:22-139.178.89.65:58508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:45.499166 kubelet[2982]: E1216 04:12:45.499115 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s8rh7" podUID="bad388bf-fcef-4c56-88ec-bd97ca364c03" Dec 16 04:12:45.695000 audit[5468]: USER_ACCT pid=5468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:45.698247 sshd[5468]: Accepted publickey for core from 139.178.89.65 port 58508 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 04:12:45.698000 audit[5468]: CRED_ACQ pid=5468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:45.698000 audit[5468]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 
a0=8 a1=7ffca585cce0 a2=3 a3=0 items=0 ppid=1 pid=5468 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:45.698000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:45.703775 sshd-session[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 04:12:45.714450 systemd-logind[1613]: New session 24 of user core. Dec 16 04:12:45.725823 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 04:12:45.733000 audit[5468]: USER_START pid=5468 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:45.736000 audit[5472]: CRED_ACQ pid=5472 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:46.494935 kubelet[2982]: E1216 04:12:46.494693 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-svthm" podUID="ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1" Dec 16 04:12:46.496081 kubelet[2982]: E1216 04:12:46.495841 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-756bfb6d7d-p4kdb" podUID="5542aa1b-8a7b-412d-a408-cf0a80b3e3bc" Dec 16 04:12:46.966942 sshd[5472]: Connection closed by 139.178.89.65 port 58508 Dec 16 04:12:46.968403 sshd-session[5468]: pam_unix(sshd:session): session closed for user core Dec 16 04:12:46.973000 audit[5468]: USER_END pid=5468 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:46.981000 audit[5468]: CRED_DISP pid=5468 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:46.993349 systemd[1]: sshd@21-10.230.69.46:22-139.178.89.65:58508.service: Deactivated successfully. Dec 16 04:12:46.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.69.46:22-139.178.89.65:58508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 04:12:46.999963 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 04:12:47.007115 systemd-logind[1613]: Session 24 logged out. Waiting for processes to exit. Dec 16 04:12:47.009473 systemd-logind[1613]: Removed session 24. Dec 16 04:12:47.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.69.46:22-139.178.89.65:58510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:47.133545 systemd[1]: Started sshd@22-10.230.69.46:22-139.178.89.65:58510.service - OpenSSH per-connection server daemon (139.178.89.65:58510). Dec 16 04:12:47.961000 audit[5482]: USER_ACCT pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:47.963582 sshd[5482]: Accepted publickey for core from 139.178.89.65 port 58510 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 04:12:47.963000 audit[5482]: CRED_ACQ pid=5482 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:47.963000 audit[5482]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec37337f0 a2=3 a3=0 items=0 ppid=1 pid=5482 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:47.963000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:47.969119 sshd-session[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) 
Dec 16 04:12:47.976650 systemd-logind[1613]: New session 25 of user core. Dec 16 04:12:47.983683 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 04:12:47.990000 audit[5482]: USER_START pid=5482 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:47.994000 audit[5486]: CRED_ACQ pid=5486 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:48.506596 sshd[5486]: Connection closed by 139.178.89.65 port 58510 Dec 16 04:12:48.506483 sshd-session[5482]: pam_unix(sshd:session): session closed for user core Dec 16 04:12:48.507000 audit[5482]: USER_END pid=5482 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:48.507000 audit[5482]: CRED_DISP pid=5482 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:48.511852 systemd[1]: sshd@22-10.230.69.46:22-139.178.89.65:58510.service: Deactivated successfully. Dec 16 04:12:48.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.69.46:22-139.178.89.65:58510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 04:12:48.515700 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 04:12:48.519671 systemd-logind[1613]: Session 25 logged out. Waiting for processes to exit. Dec 16 04:12:48.521079 systemd-logind[1613]: Removed session 25. Dec 16 04:12:50.498972 kubelet[2982]: E1216 04:12:50.498897 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:12:52.492886 kubelet[2982]: E1216 04:12:52.492812 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l" podUID="2b2fbc29-627a-4636-910d-2ada1caf4c64" Dec 16 04:12:53.745497 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 16 04:12:53.745785 kernel: audit: type=1130 audit(1765858373.730:876): pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.69.46:22-139.178.89.65:41278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:53.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.69.46:22-139.178.89.65:41278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:53.731778 systemd[1]: Started sshd@23-10.230.69.46:22-139.178.89.65:41278.service - OpenSSH per-connection server daemon (139.178.89.65:41278). Dec 16 04:12:54.626000 audit[5503]: USER_ACCT pid=5503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:54.639448 kernel: audit: type=1101 audit(1765858374.626:877): pid=5503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:54.639687 sshd[5503]: Accepted publickey for core from 139.178.89.65 port 41278 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 04:12:54.634000 audit[5503]: CRED_ACQ pid=5503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:54.646406 kernel: audit: type=1103 audit(1765858374.634:878): pid=5503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 
addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:54.646560 kernel: audit: type=1006 audit(1765858374.635:879): pid=5503 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 04:12:54.649175 sshd-session[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 04:12:54.635000 audit[5503]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc120f700 a2=3 a3=0 items=0 ppid=1 pid=5503 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:54.656211 kernel: audit: type=1300 audit(1765858374.635:879): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc120f700 a2=3 a3=0 items=0 ppid=1 pid=5503 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:54.656334 kernel: audit: type=1327 audit(1765858374.635:879): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:54.635000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:12:54.666554 systemd-logind[1613]: New session 26 of user core. Dec 16 04:12:54.673812 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 04:12:54.685000 audit[5503]: USER_START pid=5503 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:54.693409 kernel: audit: type=1105 audit(1765858374.685:880): pid=5503 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:54.692000 audit[5507]: CRED_ACQ pid=5507 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:54.699437 kernel: audit: type=1103 audit(1765858374.692:881): pid=5507 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:55.371449 sshd[5507]: Connection closed by 139.178.89.65 port 41278 Dec 16 04:12:55.372443 sshd-session[5503]: pam_unix(sshd:session): session closed for user core Dec 16 04:12:55.374000 audit[5503]: USER_END pid=5503 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:55.390578 kernel: audit: type=1106 
audit(1765858375.374:882): pid=5503 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:55.390701 kernel: audit: type=1104 audit(1765858375.374:883): pid=5503 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:55.374000 audit[5503]: CRED_DISP pid=5503 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:12:55.389246 systemd[1]: sshd@23-10.230.69.46:22-139.178.89.65:41278.service: Deactivated successfully. Dec 16 04:12:55.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.69.46:22-139.178.89.65:41278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:12:55.398845 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 04:12:55.402617 systemd-logind[1613]: Session 26 logged out. Waiting for processes to exit. Dec 16 04:12:55.405607 systemd-logind[1613]: Removed session 26. 
Dec 16 04:12:55.445000 audit[5519]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=5519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:12:55.445000 audit[5519]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc5d3f28c0 a2=0 a3=7ffc5d3f28ac items=0 ppid=3120 pid=5519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:55.445000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:12:55.453000 audit[5519]: NETFILTER_CFG table=nat:151 family=2 entries=104 op=nft_register_chain pid=5519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 04:12:55.453000 audit[5519]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc5d3f28c0 a2=0 a3=7ffc5d3f28ac items=0 ppid=3120 pid=5519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:12:55.453000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 04:12:56.493418 kubelet[2982]: E1216 04:12:56.493268 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6665678475-rs6tq" podUID="698ea2f4-6c38-4f29-af10-d89d447f19d4" 
Dec 16 04:12:59.496465 containerd[1631]: time="2025-12-16T04:12:59.496003250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 04:12:59.821889 containerd[1631]: time="2025-12-16T04:12:59.821376750Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:12:59.822691 containerd[1631]: time="2025-12-16T04:12:59.822640094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 04:12:59.822811 containerd[1631]: time="2025-12-16T04:12:59.822767823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 04:12:59.823583 kubelet[2982]: E1216 04:12:59.823490 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 04:12:59.824045 kubelet[2982]: E1216 04:12:59.823625 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 04:12:59.824115 kubelet[2982]: E1216 04:12:59.824052 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmrlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s8rh7_calico-system(bad388bf-fcef-4c56-88ec-bd97ca364c03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 04:12:59.824916 containerd[1631]: time="2025-12-16T04:12:59.824867065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 04:12:59.825529 kubelet[2982]: E1216 04:12:59.825470 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s8rh7" podUID="bad388bf-fcef-4c56-88ec-bd97ca364c03" Dec 16 04:13:00.155210 containerd[1631]: time="2025-12-16T04:13:00.154924674Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:13:00.156407 containerd[1631]: 
time="2025-12-16T04:13:00.156303812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 04:13:00.156602 containerd[1631]: time="2025-12-16T04:13:00.156546374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 04:13:00.156895 kubelet[2982]: E1216 04:13:00.156842 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 04:13:00.157409 kubelet[2982]: E1216 04:13:00.156912 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 04:13:00.157409 kubelet[2982]: E1216 04:13:00.157073 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:48980414ea8b4415945e777cac55846b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-df4q7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-756bfb6d7d-p4kdb_calico-system(5542aa1b-8a7b-412d-a408-cf0a80b3e3bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 04:13:00.160038 containerd[1631]: time="2025-12-16T04:13:00.160007897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 04:13:00.483269 containerd[1631]: 
time="2025-12-16T04:13:00.482251554Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:13:00.490013 containerd[1631]: time="2025-12-16T04:13:00.489804588Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 04:13:00.490013 containerd[1631]: time="2025-12-16T04:13:00.489955351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 04:13:00.490220 kubelet[2982]: E1216 04:13:00.490160 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 04:13:00.490302 kubelet[2982]: E1216 04:13:00.490224 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 04:13:00.490475 kubelet[2982]: E1216 04:13:00.490368 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-df4q7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-756bfb6d7d-p4kdb_calico-system(5542aa1b-8a7b-412d-a408-cf0a80b3e3bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 04:13:00.491760 kubelet[2982]: E1216 04:13:00.491654 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-756bfb6d7d-p4kdb" podUID="5542aa1b-8a7b-412d-a408-cf0a80b3e3bc" Dec 16 04:13:00.493570 kubelet[2982]: E1216 04:13:00.493471 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-svthm" podUID="ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1" Dec 16 04:13:00.562191 systemd[1]: Started sshd@24-10.230.69.46:22-139.178.89.65:36516.service - OpenSSH per-connection server daemon (139.178.89.65:36516). Dec 16 04:13:00.574857 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 04:13:00.574957 kernel: audit: type=1130 audit(1765858380.561:887): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.69.46:22-139.178.89.65:36516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 04:13:00.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.69.46:22-139.178.89.65:36516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:13:01.510000 audit[5549]: USER_ACCT pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:13:01.521963 sshd[5549]: Accepted publickey for core from 139.178.89.65 port 36516 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 04:13:01.522616 kernel: audit: type=1101 audit(1765858381.510:888): pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:13:01.519000 audit[5549]: CRED_ACQ pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:13:01.523750 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 04:13:01.529692 kernel: audit: type=1103 audit(1765858381.519:889): pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:13:01.535468 kernel: audit: type=1006 audit(1765858381.519:890): pid=5549 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 
tty=(none) old-ses=4294967295 ses=27 res=1 Dec 16 04:13:01.542514 kernel: audit: type=1300 audit(1765858381.519:890): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb99f1b50 a2=3 a3=0 items=0 ppid=1 pid=5549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:13:01.519000 audit[5549]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb99f1b50 a2=3 a3=0 items=0 ppid=1 pid=5549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:13:01.558020 kernel: audit: type=1327 audit(1765858381.519:890): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:13:01.519000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 04:13:01.556586 systemd-logind[1613]: New session 27 of user core. Dec 16 04:13:01.562643 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 16 04:13:01.571000 audit[5549]: USER_START pid=5549 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:13:01.579457 kernel: audit: type=1105 audit(1765858381.571:891): pid=5549 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:13:01.578000 audit[5553]: CRED_ACQ pid=5553 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:13:01.584461 kernel: audit: type=1103 audit(1765858381.578:892): pid=5553 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:13:02.221214 sshd[5553]: Connection closed by 139.178.89.65 port 36516 Dec 16 04:13:02.222731 sshd-session[5549]: pam_unix(sshd:session): session closed for user core Dec 16 04:13:02.225000 audit[5549]: USER_END pid=5549 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:13:02.240264 kernel: audit: type=1106 
audit(1765858382.225:893): pid=5549 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:13:02.241669 systemd[1]: sshd@24-10.230.69.46:22-139.178.89.65:36516.service: Deactivated successfully. Dec 16 04:13:02.249614 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 04:13:02.251599 systemd-logind[1613]: Session 27 logged out. Waiting for processes to exit. Dec 16 04:13:02.260262 kernel: audit: type=1104 audit(1765858382.225:894): pid=5549 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:13:02.225000 audit[5549]: CRED_DISP pid=5549 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:13:02.263833 systemd-logind[1613]: Removed session 27. Dec 16 04:13:02.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.69.46:22-139.178.89.65:36516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 04:13:02.502394 kubelet[2982]: E1216 04:13:02.502165 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cd92j" podUID="d6d2249c-912c-448c-8aa3-089c6b8243d1" Dec 16 04:13:05.500803 containerd[1631]: time="2025-12-16T04:13:05.500715154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 04:13:05.809865 containerd[1631]: time="2025-12-16T04:13:05.809688277Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:13:05.811627 containerd[1631]: time="2025-12-16T04:13:05.811560888Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 04:13:05.811720 containerd[1631]: time="2025-12-16T04:13:05.811584981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 04:13:05.812817 kubelet[2982]: E1216 04:13:05.812610 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 04:13:05.812817 kubelet[2982]: E1216 04:13:05.812738 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 04:13:05.814666 kubelet[2982]: E1216 04:13:05.813433 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdgpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5887594b4-5fm6l_calico-apiserver(2b2fbc29-627a-4636-910d-2ada1caf4c64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 04:13:05.815015 kubelet[2982]: E1216 04:13:05.814929 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-5fm6l" podUID="2b2fbc29-627a-4636-910d-2ada1caf4c64" Dec 16 04:13:07.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.69.46:22-139.178.89.65:36518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 04:13:07.378574 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 04:13:07.378816 kernel: audit: type=1130 audit(1765858387.368:896): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.69.46:22-139.178.89.65:36518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 04:13:07.369156 systemd[1]: Started sshd@25-10.230.69.46:22-139.178.89.65:36518.service - OpenSSH per-connection server daemon (139.178.89.65:36518). Dec 16 04:13:07.497012 containerd[1631]: time="2025-12-16T04:13:07.496862180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 04:13:07.825828 containerd[1631]: time="2025-12-16T04:13:07.825434777Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 04:13:07.826907 containerd[1631]: time="2025-12-16T04:13:07.826794863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 04:13:07.827126 containerd[1631]: time="2025-12-16T04:13:07.826853430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 04:13:07.827563 kubelet[2982]: E1216 04:13:07.827493 2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 04:13:07.828065 kubelet[2982]: E1216 04:13:07.827576 2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 04:13:07.828840 kubelet[2982]: E1216 04:13:07.828708 2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ch2vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6665678475-rs6tq_calico-system(698ea2f4-6c38-4f29-af10-d89d447f19d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 04:13:07.830777 kubelet[2982]: E1216 04:13:07.830025 2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6665678475-rs6tq" podUID="698ea2f4-6c38-4f29-af10-d89d447f19d4" Dec 16 04:13:08.225000 audit[5587]: USER_ACCT pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh 
res=success' Dec 16 04:13:08.229740 sshd[5587]: Accepted publickey for core from 139.178.89.65 port 36518 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 04:13:08.233429 kernel: audit: type=1101 audit(1765858388.225:897): pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:13:08.233000 audit[5587]: CRED_ACQ pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:13:08.240402 kernel: audit: type=1103 audit(1765858388.233:898): pid=5587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 04:13:08.240558 sshd-session[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 04:13:08.233000 audit[5587]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce9c159f0 a2=3 a3=0 items=0 ppid=1 pid=5587 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 04:13:08.253333 kernel: audit: type=1006 audit(1765858388.233:899): pid=5587 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Dec 16 04:13:08.253513 kernel: audit: type=1300 audit(1765858388.233:899): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce9c159f0 a2=3 a3=0 items=0 ppid=1 pid=5587 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 04:13:08.233000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 04:13:08.260420 kernel: audit: type=1327 audit(1765858388.233:899): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 04:13:08.266459 systemd-logind[1613]: New session 28 of user core.
Dec 16 04:13:08.276786 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 16 04:13:08.284000 audit[5587]: USER_START pid=5587 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 04:13:08.293437 kernel: audit: type=1105 audit(1765858388.284:900): pid=5587 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 04:13:08.299000 audit[5591]: CRED_ACQ pid=5591 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 04:13:08.304417 kernel: audit: type=1103 audit(1765858388.299:901): pid=5591 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 04:13:08.920369 sshd[5591]: Connection closed by 139.178.89.65 port 36518
Dec 16 04:13:08.919604 sshd-session[5587]: pam_unix(sshd:session): session closed for user core
Dec 16 04:13:08.921000 audit[5587]: USER_END pid=5587 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 04:13:08.929413 kernel: audit: type=1106 audit(1765858388.921:902): pid=5587 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 04:13:08.921000 audit[5587]: CRED_DISP pid=5587 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 04:13:08.935428 kernel: audit: type=1104 audit(1765858388.921:903): pid=5587 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 04:13:08.935479 systemd[1]: sshd@25-10.230.69.46:22-139.178.89.65:36518.service: Deactivated successfully.
Dec 16 04:13:08.940111 systemd[1]: session-28.scope: Deactivated successfully.
Dec 16 04:13:08.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.69.46:22-139.178.89.65:36518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 04:13:08.942253 systemd-logind[1613]: Session 28 logged out. Waiting for processes to exit.
Dec 16 04:13:08.946127 systemd-logind[1613]: Removed session 28.
Dec 16 04:13:10.496476 kubelet[2982]: E1216 04:13:10.496421    2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s8rh7" podUID="bad388bf-fcef-4c56-88ec-bd97ca364c03"
Dec 16 04:13:12.494426 containerd[1631]: time="2025-12-16T04:13:12.494253824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 04:13:12.831349 containerd[1631]: time="2025-12-16T04:13:12.831295056Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 04:13:12.832363 containerd[1631]: time="2025-12-16T04:13:12.832321308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 04:13:12.832549 containerd[1631]: time="2025-12-16T04:13:12.832518939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Dec 16 04:13:12.833347 kubelet[2982]: E1216 04:13:12.832793    2982 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 04:13:12.833347 kubelet[2982]: E1216 04:13:12.832910    2982 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 04:13:12.833347 kubelet[2982]: E1216 04:13:12.833122    2982 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqvvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5887594b4-svthm_calico-apiserver(ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 04:13:12.834931 kubelet[2982]: E1216 04:13:12.834857    2982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5887594b4-svthm" podUID="ff1b9f91-a1b4-4d4a-995d-2e8c1ad4d2e1"