Dec 12 19:30:52.099701 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:17:57 -00 2025 Dec 12 19:30:52.099741 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 12 19:30:52.099754 kernel: BIOS-provided physical RAM map: Dec 12 19:30:52.099762 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 12 19:30:52.099781 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 12 19:30:52.099789 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 12 19:30:52.099798 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Dec 12 19:30:52.099809 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Dec 12 19:30:52.099818 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 12 19:30:52.099826 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 12 19:30:52.099834 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 12 19:30:52.099842 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 12 19:30:52.099853 kernel: NX (Execute Disable) protection: active Dec 12 19:30:52.099861 kernel: APIC: Static calls initialized Dec 12 19:30:52.099871 kernel: SMBIOS 2.8 present. Dec 12 19:30:52.099881 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Dec 12 19:30:52.099890 kernel: DMI: Memory slots populated: 1/1 Dec 12 19:30:52.099902 kernel: Hypervisor detected: KVM Dec 12 19:30:52.099911 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 12 19:30:52.099920 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 12 19:30:52.099929 kernel: kvm-clock: using sched offset of 4766906167 cycles Dec 12 19:30:52.099940 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 12 19:30:52.099950 kernel: tsc: Detected 2294.576 MHz processor Dec 12 19:30:52.099960 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 12 19:30:52.099970 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 12 19:30:52.099982 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 12 19:30:52.099992 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 12 19:30:52.100001 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 12 19:30:52.100011 kernel: Using GB pages for direct mapping Dec 12 19:30:52.100020 kernel: ACPI: Early table checksum verification disabled Dec 12 19:30:52.100030 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Dec 12 19:30:52.100040 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 19:30:52.100049 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 19:30:52.100061 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 19:30:52.100071 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Dec 12 19:30:52.100080 kernel: ACPI: APIC 
0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 19:30:52.100090 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 19:30:52.100099 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 19:30:52.100109 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 19:30:52.100121 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Dec 12 19:30:52.100135 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Dec 12 19:30:52.100145 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Dec 12 19:30:52.100155 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Dec 12 19:30:52.100165 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Dec 12 19:30:52.100182 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Dec 12 19:30:52.100191 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Dec 12 19:30:52.100201 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 12 19:30:52.100211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 12 19:30:52.100221 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Dec 12 19:30:52.100231 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Dec 12 19:30:52.100241 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Dec 12 19:30:52.100254 kernel: Zone ranges: Dec 12 19:30:52.100264 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 12 19:30:52.100274 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Dec 12 19:30:52.100284 kernel: Normal empty Dec 12 19:30:52.100294 kernel: Device empty Dec 12 19:30:52.100304 kernel: Movable zone start for each node Dec 12 19:30:52.100314 kernel: Early memory node ranges Dec 12 19:30:52.100323 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 12 19:30:52.100336 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Dec 12 19:30:52.100346 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Dec 12 19:30:52.100357 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 12 19:30:52.100366 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 12 19:30:52.100377 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Dec 12 19:30:52.100386 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 12 19:30:52.100399 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 12 19:30:52.100413 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 12 19:30:52.100423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 12 19:30:52.100433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 12 19:30:52.100443 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 12 19:30:52.100453 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 12 19:30:52.100463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 12 19:30:52.100473 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 12 19:30:52.100486 kernel: TSC deadline timer available Dec 12 19:30:52.100503 kernel: CPU topo: Max. logical packages: 16 Dec 12 19:30:52.100513 kernel: CPU topo: Max. logical dies: 16 Dec 12 19:30:52.100523 kernel: CPU topo: Max. dies per package: 1 Dec 12 19:30:52.100533 kernel: CPU topo: Max. 
threads per core: 1 Dec 12 19:30:52.100543 kernel: CPU topo: Num. cores per package: 1 Dec 12 19:30:52.100553 kernel: CPU topo: Num. threads per package: 1 Dec 12 19:30:52.100563 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Dec 12 19:30:52.100576 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 12 19:30:52.100586 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 12 19:30:52.100596 kernel: Booting paravirtualized kernel on KVM Dec 12 19:30:52.100606 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 12 19:30:52.100616 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Dec 12 19:30:52.100626 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Dec 12 19:30:52.100636 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Dec 12 19:30:52.102307 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Dec 12 19:30:52.102331 kernel: kvm-guest: PV spinlocks enabled Dec 12 19:30:52.102343 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 12 19:30:52.102355 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 12 19:30:52.102366 kernel: random: crng init done Dec 12 19:30:52.102376 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 19:30:52.102386 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 12 19:30:52.102408 kernel: Fallback order for Node 0: 0 Dec 12 19:30:52.102418 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Dec 12 19:30:52.102428 kernel: Policy zone: DMA32 Dec 12 19:30:52.102438 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 19:30:52.102448 kernel: software IO TLB: area num 16. Dec 12 19:30:52.102458 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Dec 12 19:30:52.102469 kernel: ftrace: allocating 40103 entries in 157 pages Dec 12 19:30:52.102484 kernel: ftrace: allocated 157 pages with 5 groups Dec 12 19:30:52.102503 kernel: Dynamic Preempt: voluntary Dec 12 19:30:52.102513 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 19:30:52.102525 kernel: rcu: RCU event tracing is enabled. Dec 12 19:30:52.102535 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Dec 12 19:30:52.102545 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 19:30:52.102556 kernel: Rude variant of Tasks RCU enabled. Dec 12 19:30:52.102571 kernel: Tracing variant of Tasks RCU enabled. Dec 12 19:30:52.102581 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 12 19:30:52.102591 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Dec 12 19:30:52.102602 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 12 19:30:52.102613 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 12 19:30:52.102623 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Dec 12 19:30:52.102633 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Dec 12 19:30:52.102643 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 12 19:30:52.102669 kernel: Console: colour VGA+ 80x25 Dec 12 19:30:52.102695 kernel: printk: legacy console [tty0] enabled Dec 12 19:30:52.102711 kernel: printk: legacy console [ttyS0] enabled Dec 12 19:30:52.102722 kernel: ACPI: Core revision 20240827 Dec 12 19:30:52.102736 kernel: APIC: Switch to symmetric I/O mode setup Dec 12 19:30:52.102746 kernel: x2apic enabled Dec 12 19:30:52.102757 kernel: APIC: Switched APIC routing to: physical x2apic Dec 12 19:30:52.102768 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns Dec 12 19:30:52.102779 kernel: Calibrating delay loop (skipped) preset value.. 4589.15 BogoMIPS (lpj=2294576) Dec 12 19:30:52.102795 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 12 19:30:52.102806 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 12 19:30:52.102817 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 12 19:30:52.102827 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 12 19:30:52.102837 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Dec 12 19:30:52.102853 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Dec 12 19:30:52.102863 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Dec 12 19:30:52.102873 kernel: RETBleed: Mitigation: Enhanced IBRS Dec 12 19:30:52.102883 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 12 19:30:52.102894 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 12 19:30:52.102904 kernel: TAA: Mitigation: Clear CPU buffers Dec 12 19:30:52.102914 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 12 19:30:52.102924 kernel: GDS: Unknown: Dependent on hypervisor status Dec 12 19:30:52.102933 kernel: active return thunk: its_return_thunk Dec 12 19:30:52.102949 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 12 19:30:52.102959 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 12 19:30:52.102970 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 12 19:30:52.102980 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 12 19:30:52.102990 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 12 19:30:52.103000 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 12 19:30:52.103010 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 12 19:30:52.103020 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Dec 12 19:30:52.103030 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 12 19:30:52.103040 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Dec 12 19:30:52.103050 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Dec 12 19:30:52.103066 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Dec 12 19:30:52.103076 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Dec 12 19:30:52.103086 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. 
Dec 12 19:30:52.103097 kernel: Freeing SMP alternatives memory: 32K Dec 12 19:30:52.103106 kernel: pid_max: default: 32768 minimum: 301 Dec 12 19:30:52.103116 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 19:30:52.103127 kernel: landlock: Up and running. Dec 12 19:30:52.103137 kernel: SELinux: Initializing. Dec 12 19:30:52.103147 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 12 19:30:52.103157 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 12 19:30:52.103167 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6) Dec 12 19:30:52.103183 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 12 19:30:52.103194 kernel: signal: max sigframe size: 3632 Dec 12 19:30:52.103205 kernel: rcu: Hierarchical SRCU implementation. Dec 12 19:30:52.103216 kernel: rcu: Max phase no-delay instances is 400. Dec 12 19:30:52.103226 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Dec 12 19:30:52.103237 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 12 19:30:52.103248 kernel: smp: Bringing up secondary CPUs ... Dec 12 19:30:52.103264 kernel: smpboot: x86: Booting SMP configuration: Dec 12 19:30:52.103275 kernel: .... node #0, CPUs: #1 Dec 12 19:30:52.103285 kernel: smp: Brought up 1 node, 2 CPUs Dec 12 19:30:52.103295 kernel: smpboot: Total of 2 processors activated (9178.30 BogoMIPS) Dec 12 19:30:52.103306 kernel: Memory: 1914124K/2096616K available (14336K kernel code, 2444K rwdata, 29892K rodata, 15464K init, 2576K bss, 176500K reserved, 0K cma-reserved) Dec 12 19:30:52.103317 kernel: devtmpfs: initialized Dec 12 19:30:52.103328 kernel: x86/mm: Memory block size: 128MB Dec 12 19:30:52.103344 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 19:30:52.103356 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Dec 12 19:30:52.103369 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 19:30:52.103380 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 19:30:52.103391 kernel: audit: initializing netlink subsys (disabled) Dec 12 19:30:52.103401 kernel: audit: type=2000 audit(1765567849.035:1): state=initialized audit_enabled=0 res=1 Dec 12 19:30:52.103412 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 19:30:52.103428 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 12 19:30:52.103439 kernel: cpuidle: using governor menu Dec 12 19:30:52.103449 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 19:30:52.103460 kernel: dca service started, version 1.12.1 Dec 12 19:30:52.103471 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Dec 12 19:30:52.103482 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 12 19:30:52.104684 kernel: PCI: Using configuration type 1 for base access Dec 12 19:30:52.104709 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 12 19:30:52.104730 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 12 19:30:52.104741 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 12 19:30:52.104752 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 19:30:52.104763 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 19:30:52.104774 kernel: ACPI: Added _OSI(Module Device) Dec 12 19:30:52.104784 kernel: ACPI: Added _OSI(Processor Device) Dec 12 19:30:52.104795 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 19:30:52.104811 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 19:30:52.104822 kernel: ACPI: Interpreter enabled Dec 12 19:30:52.104833 kernel: ACPI: PM: (supports S0 S5) Dec 12 19:30:52.104844 kernel: ACPI: Using IOAPIC for interrupt routing Dec 12 19:30:52.104854 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 12 19:30:52.104865 kernel: PCI: Using E820 reservations for host bridge windows Dec 12 19:30:52.104876 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 12 19:30:52.104893 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 12 19:30:52.105114 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 12 19:30:52.105251 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 12 19:30:52.105384 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 12 19:30:52.105398 kernel: PCI host bridge to bus 0000:00 Dec 12 19:30:52.105544 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 12 19:30:52.105709 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 12 19:30:52.105829 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 12 19:30:52.105945 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Dec 12 19:30:52.106058 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 12 19:30:52.106172 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Dec 12 19:30:52.106296 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 12 19:30:52.106448 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 12 19:30:52.106600 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Dec 12 19:30:52.107398 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Dec 12 19:30:52.107561 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Dec 12 19:30:52.110789 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Dec 12 19:30:52.110937 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 12 19:30:52.111084 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 19:30:52.111217 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Dec 12 19:30:52.111345 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 12 19:30:52.111474 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 12 19:30:52.111623 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 12 19:30:52.111871 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 19:30:52.113673 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Dec 12 19:30:52.113820 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 12 
19:30:52.113954 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 12 19:30:52.114699 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 12 19:30:52.117165 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 19:30:52.117315 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Dec 12 19:30:52.117448 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 12 19:30:52.117592 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 12 19:30:52.118778 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 12 19:30:52.118934 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 19:30:52.119079 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Dec 12 19:30:52.119209 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 12 19:30:52.119339 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 12 19:30:52.119468 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 12 19:30:52.119614 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 19:30:52.119762 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Dec 12 19:30:52.119900 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 12 19:30:52.120057 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 12 19:30:52.120187 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 12 19:30:52.120322 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 19:30:52.120451 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Dec 12 19:30:52.120587 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 12 19:30:52.122793 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 12 19:30:52.122940 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 12 19:30:52.123081 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 19:30:52.123213 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Dec 12 19:30:52.123342 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 12 19:30:52.123471 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 12 19:30:52.123622 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 12 19:30:52.124923 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 19:30:52.125077 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Dec 12 19:30:52.125258 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 12 19:30:52.125537 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 12 19:30:52.125713 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 12 19:30:52.125853 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 12 19:30:52.125985 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Dec 12 19:30:52.126111 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Dec 12 19:30:52.126239 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Dec 12 19:30:52.127107 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Dec 12 19:30:52.127261 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 12 19:30:52.127391 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Dec 12 19:30:52.127531 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Dec 12 19:30:52.129817 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Dec 12 19:30:52.130005 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 12 19:30:52.130177 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 12 19:30:52.130322 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 12 19:30:52.130454 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Dec 12 19:30:52.130596 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Dec 12 19:30:52.130742 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 12 19:30:52.130871 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Dec 12 19:30:52.131018 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Dec 12 19:30:52.131148 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Dec 12 19:30:52.131278 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 12 19:30:52.131407 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 12 19:30:52.131549 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 12 19:30:52.133515 kernel: pci_bus 0000:02: extended config space not accessible Dec 12 19:30:52.133718 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Dec 12 19:30:52.133866 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Dec 12 19:30:52.134002 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 12 19:30:52.134141 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Dec 12 19:30:52.134323 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Dec 12 19:30:52.135448 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 12 19:30:52.137590 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Dec 12 19:30:52.137755 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Dec 12 19:30:52.137895 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 12 19:30:52.138029 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 12 19:30:52.138180 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 12 19:30:52.138322 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 12 19:30:52.138452 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 12 19:30:52.138592 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 12 19:30:52.138607 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 12 19:30:52.138619 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 12 19:30:52.138638 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 12 19:30:52.138649 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 12 19:30:52.138673 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 12 19:30:52.138684 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 12 19:30:52.138695 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 12 19:30:52.138706 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 12 19:30:52.138717 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 12 19:30:52.138727 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 12 19:30:52.138745 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 12 19:30:52.138756 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 12 19:30:52.138771 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 
12 19:30:52.138782 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 12 19:30:52.138793 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 12 19:30:52.138804 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 12 19:30:52.138815 kernel: iommu: Default domain type: Translated Dec 12 19:30:52.138831 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 12 19:30:52.138842 kernel: PCI: Using ACPI for IRQ routing Dec 12 19:30:52.138853 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 12 19:30:52.138864 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 12 19:30:52.138874 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Dec 12 19:30:52.139003 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 12 19:30:52.139139 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 12 19:30:52.139263 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 12 19:30:52.139278 kernel: vgaarb: loaded Dec 12 19:30:52.139290 kernel: clocksource: Switched to clocksource kvm-clock Dec 12 19:30:52.139300 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 19:30:52.139311 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 19:30:52.139322 kernel: pnp: PnP ACPI init Dec 12 19:30:52.139468 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 12 19:30:52.139484 kernel: pnp: PnP ACPI: found 5 devices Dec 12 19:30:52.139501 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 12 19:30:52.139513 kernel: NET: Registered PF_INET protocol family Dec 12 19:30:52.139523 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 19:30:52.139534 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 12 19:30:52.139545 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 19:30:52.139564 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 12 19:30:52.139575 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 12 19:30:52.139586 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 12 19:30:52.139597 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 12 19:30:52.139608 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 12 19:30:52.139618 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 19:30:52.139629 kernel: NET: Registered PF_XDP protocol family Dec 12 19:30:52.140588 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Dec 12 19:30:52.140772 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 12 19:30:52.140904 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 12 19:30:52.141037 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 12 19:30:52.141167 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 12 19:30:52.141297 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 12 19:30:52.141440 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 12 19:30:52.141578 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 12 19:30:52.141718 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Dec 12 19:30:52.141881 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Dec 12 19:30:52.142017 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Dec 12 19:30:52.142144 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Dec 12 19:30:52.142270 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Dec 12 19:30:52.142405 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Dec 12 19:30:52.142538 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Dec 12 19:30:52.142673 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Dec 12 19:30:52.142812 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 12 19:30:52.143057 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 12 19:30:52.143190 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 12 19:30:52.143328 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Dec 12 19:30:52.143455 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 12 19:30:52.143597 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 12 19:30:52.143759 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 12 19:30:52.143888 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Dec 12 19:30:52.144024 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 12 19:30:52.144151 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 12 19:30:52.144279 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 12 19:30:52.144406 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Dec 12 19:30:52.144542 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 12 19:30:52.144686 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 12 19:30:52.144817 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 12 19:30:52.144952 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Dec 12 19:30:52.145077 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 12 19:30:52.145205 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 12 19:30:52.145331 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 12 19:30:52.145466 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Dec 12 19:30:52.145599 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 12 19:30:52.145743 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 12 19:30:52.145876 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 12 19:30:52.146002 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Dec 12 19:30:52.146128 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 12 19:30:52.146256 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 12 19:30:52.146383 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 12 19:30:52.146526 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Dec 12 19:30:52.146672 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 12 19:30:52.146805 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 12 19:30:52.146934 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 12 19:30:52.147060 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Dec 12 19:30:52.147186 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 12 19:30:52.147310 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 12 19:30:52.147441 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 12 19:30:52.147566 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 12 19:30:52.147700 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 12 19:30:52.147817 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Dec 12 19:30:52.147931 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 12 19:30:52.148045 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Dec 12 19:30:52.148178 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Dec 12 19:30:52.148305 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Dec 12 19:30:52.148438 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Dec 12 19:30:52.148572 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 12 19:30:52.148709 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Dec 12 19:30:52.148828 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 12 19:30:52.148950 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 12 19:30:52.149075 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Dec 12 19:30:52.149197 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 12 19:30:52.149314 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 12 19:30:52.149439 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Dec 12 19:30:52.149570 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 12 19:30:52.149706 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 12 19:30:52.149830 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Dec 12 19:30:52.149949 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 12 19:30:52.150066 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 12 19:30:52.150193 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Dec 12 19:30:52.150323 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 12 19:30:52.150442 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 12 19:30:52.150575 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Dec 12 19:30:52.150751 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Dec 12 19:30:52.150871 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 12 19:30:52.150994 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Dec 12 19:30:52.151122 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 12 19:30:52.151240 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 12 19:30:52.151255 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 12 19:30:52.151268 kernel: PCI: CLS 0 bytes, default 64 Dec 12 19:30:52.151279 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 12 19:30:52.151291 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 12 19:30:52.151308 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 12 19:30:52.151320 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns Dec 12 19:30:52.151332 kernel: Initialise system trusted keyrings Dec 12 19:30:52.151343 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 12 19:30:52.151355 
kernel: Key type asymmetric registered Dec 12 19:30:52.151366 kernel: Asymmetric key parser 'x509' registered Dec 12 19:30:52.151377 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 12 19:30:52.151394 kernel: io scheduler mq-deadline registered Dec 12 19:30:52.151405 kernel: io scheduler kyber registered Dec 12 19:30:52.151417 kernel: io scheduler bfq registered Dec 12 19:30:52.151557 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 12 19:30:52.151707 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 12 19:30:52.151837 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 19:30:52.151973 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 12 19:30:52.152100 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 12 19:30:52.152226 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 19:30:52.152353 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 12 19:30:52.152481 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 12 19:30:52.152624 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 19:30:52.152764 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 12 19:30:52.152890 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 12 19:30:52.153017 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 19:30:52.153144 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 12 19:30:52.153272 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 12 19:30:52.153407 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 19:30:52.153543 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 12 19:30:52.153687 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 12 19:30:52.153814 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 19:30:52.153950 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 12 19:30:52.154076 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 12 19:30:52.154202 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 19:30:52.154330 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 12 19:30:52.154457 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 12 19:30:52.154596 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 19:30:52.154619 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 12 19:30:52.154632 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 12 19:30:52.154644 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 12 19:30:52.154665 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 19:30:52.154680 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 12 19:30:52.154692 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 12 
19:30:52.154708 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 12 19:30:52.154720 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 12 19:30:52.154856 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 12 19:30:52.154872 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 12 19:30:52.154992 kernel: rtc_cmos 00:03: registered as rtc0 Dec 12 19:30:52.155114 kernel: rtc_cmos 00:03: setting system clock to 2025-12-12T19:30:50 UTC (1765567850) Dec 12 19:30:52.155241 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 12 19:30:52.155255 kernel: intel_pstate: CPU model not supported Dec 12 19:30:52.155268 kernel: NET: Registered PF_INET6 protocol family Dec 12 19:30:52.155280 kernel: Segment Routing with IPv6 Dec 12 19:30:52.155291 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 19:30:52.155302 kernel: NET: Registered PF_PACKET protocol family Dec 12 19:30:52.155314 kernel: Key type dns_resolver registered Dec 12 19:30:52.155325 kernel: IPI shorthand broadcast: enabled Dec 12 19:30:52.155343 kernel: sched_clock: Marking stable (1995002300, 119807241)->(2257257425, -142447884) Dec 12 19:30:52.155355 kernel: registered taskstats version 1 Dec 12 19:30:52.155366 kernel: Loading compiled-in X.509 certificates Dec 12 19:30:52.155378 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: b90706f42f055ab9f35fc8fc29156d877adb12c4' Dec 12 19:30:52.155389 kernel: Demotion targets for Node 0: null Dec 12 19:30:52.155400 kernel: Key type .fscrypt registered Dec 12 19:30:52.155411 kernel: Key type fscrypt-provisioning registered Dec 12 19:30:52.155427 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 12 19:30:52.155439 kernel: ima: Allocated hash algorithm: sha1 Dec 12 19:30:52.155450 kernel: ima: No architecture policies found Dec 12 19:30:52.155462 kernel: clk: Disabling unused clocks Dec 12 19:30:52.155473 kernel: Freeing unused kernel image (initmem) memory: 15464K Dec 12 19:30:52.155485 kernel: Write protecting the kernel read-only data: 45056k Dec 12 19:30:52.155503 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Dec 12 19:30:52.155520 kernel: Run /init as init process Dec 12 19:30:52.155532 kernel: with arguments: Dec 12 19:30:52.155543 kernel: /init Dec 12 19:30:52.155554 kernel: with environment: Dec 12 19:30:52.155565 kernel: HOME=/ Dec 12 19:30:52.155576 kernel: TERM=linux Dec 12 19:30:52.155588 kernel: ACPI: bus type USB registered Dec 12 19:30:52.155600 kernel: usbcore: registered new interface driver usbfs Dec 12 19:30:52.155617 kernel: usbcore: registered new interface driver hub Dec 12 19:30:52.155629 kernel: usbcore: registered new device driver usb Dec 12 19:30:52.155790 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 12 19:30:52.155922 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 12 19:30:52.156053 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 12 19:30:52.156184 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 12 19:30:52.156324 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 12 19:30:52.156456 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 12 19:30:52.156635 kernel: hub 1-0:1.0: USB hub found Dec 12 19:30:52.156793 kernel: hub 1-0:1.0: 4 ports detected Dec 12 19:30:52.156946 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 12 19:30:52.157106 kernel: hub 2-0:1.0: USB hub found Dec 12 19:30:52.157246 kernel: hub 2-0:1.0: 4 ports detected Dec 12 19:30:52.157261 kernel: SCSI subsystem initialized Dec 12 19:30:52.157273 kernel: libata version 3.00 loaded. Dec 12 19:30:52.157403 kernel: ahci 0000:00:1f.2: version 3.0 Dec 12 19:30:52.157418 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 12 19:30:52.157560 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 12 19:30:52.157706 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 12 19:30:52.157822 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 12 19:30:52.157954 kernel: scsi host0: ahci Dec 12 19:30:52.158077 kernel: scsi host1: ahci Dec 12 19:30:52.158198 kernel: scsi host2: ahci Dec 12 19:30:52.158327 kernel: scsi host3: ahci Dec 12 19:30:52.158477 kernel: scsi host4: ahci Dec 12 19:30:52.158619 kernel: scsi host5: ahci Dec 12 19:30:52.158634 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Dec 12 19:30:52.158646 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Dec 12 19:30:52.158676 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Dec 12 19:30:52.158691 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Dec 12 19:30:52.158703 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Dec 12 19:30:52.158714 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 lpm-pol 1 Dec 12 19:30:52.158868 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 12 19:30:52.158884 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 12 19:30:52.158896 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 12 19:30:52.158914 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 12 19:30:52.158925 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 12 19:30:52.158936 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 12 19:30:52.158948 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 12 19:30:52.158959 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 12 19:30:52.159099 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Dec 12 19:30:52.159121 kernel: usbcore: registered new interface driver usbhid Dec 12 19:30:52.159245 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 12 19:30:52.159259 kernel: usbhid: USB HID core driver Dec 12 19:30:52.159272 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 19:30:52.159283 kernel: GPT:25804799 != 125829119 Dec 12 19:30:52.159295 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 19:30:52.159306 kernel: GPT:25804799 != 125829119 Dec 12 19:30:52.159323 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 12 19:30:52.159334 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 19:30:52.159345 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 19:30:52.159513 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 12 19:30:52.159529 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 12 19:30:52.159541 kernel: device-mapper: uevent: version 1.0.3 Dec 12 19:30:52.159559 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 19:30:52.159570 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 12 19:30:52.159582 kernel: raid6: avx512x4 gen() 17616 MB/s Dec 12 19:30:52.159593 kernel: raid6: avx512x2 gen() 17683 MB/s Dec 12 19:30:52.159604 kernel: raid6: avx512x1 gen() 17680 MB/s Dec 12 19:30:52.159616 kernel: raid6: avx2x4 gen() 17627 MB/s Dec 12 19:30:52.159628 kernel: raid6: avx2x2 gen() 17572 MB/s Dec 12 19:30:52.159645 kernel: raid6: avx2x1 gen() 13683 MB/s Dec 12 19:30:52.159674 kernel: raid6: using algorithm avx512x2 gen() 17683 MB/s Dec 12 19:30:52.159686 kernel: raid6: .... xor() 22064 MB/s, rmw enabled Dec 12 19:30:52.159698 kernel: raid6: using avx512x2 recovery algorithm Dec 12 19:30:52.159709 kernel: xor: automatically using best checksumming function avx Dec 12 19:30:52.159721 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 19:30:52.159732 kernel: BTRFS: device fsid ea73a94a-fb20-4d45-8448-4c6f4c422a4f devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (195) Dec 12 19:30:52.159750 kernel: BTRFS info (device dm-0): first mount of filesystem ea73a94a-fb20-4d45-8448-4c6f4c422a4f Dec 12 19:30:52.159762 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 12 19:30:52.159774 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 19:30:52.159785 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 19:30:52.159796 kernel: loop: module loaded Dec 12 19:30:52.159808 kernel: loop0: detected capacity change from 0 to 100136 Dec 12 19:30:52.159819 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 19:30:52.159838 systemd[1]: Successfully made /usr/ read-only. Dec 12 19:30:52.159854 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 19:30:52.159867 systemd[1]: Detected virtualization kvm. Dec 12 19:30:52.159878 systemd[1]: Detected architecture x86-64. Dec 12 19:30:52.159889 systemd[1]: Running in initrd. Dec 12 19:30:52.159901 systemd[1]: No hostname configured, using default hostname. Dec 12 19:30:52.159918 systemd[1]: Hostname set to . Dec 12 19:30:52.159930 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 19:30:52.159942 systemd[1]: Queued start job for default target initrd.target. Dec 12 19:30:52.159954 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 19:30:52.159966 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 19:30:52.159978 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 19:30:52.159995 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 19:30:52.160008 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 19:30:52.160021 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Dec 12 19:30:52.160033 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 19:30:52.160045 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 19:30:52.160057 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 19:30:52.160074 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 19:30:52.160086 systemd[1]: Reached target paths.target - Path Units. Dec 12 19:30:52.160098 systemd[1]: Reached target slices.target - Slice Units. Dec 12 19:30:52.160110 systemd[1]: Reached target swap.target - Swaps. Dec 12 19:30:52.160121 systemd[1]: Reached target timers.target - Timer Units. Dec 12 19:30:52.160133 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 19:30:52.160145 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 19:30:52.160163 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 19:30:52.160175 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 19:30:52.160186 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 19:30:52.160199 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 19:30:52.160210 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 19:30:52.160222 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 19:30:52.160234 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 19:30:52.160252 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 19:30:52.160264 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 19:30:52.160276 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 19:30:52.160288 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 19:30:52.160300 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 19:30:52.160312 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 19:30:52.160330 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 19:30:52.160343 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 19:30:52.160355 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 19:30:52.160367 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 19:30:52.160385 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 19:30:52.160397 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 19:30:52.160441 systemd-journald[332]: Collecting audit messages is enabled. Dec 12 19:30:52.160475 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 19:30:52.160488 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 19:30:52.160511 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Dec 12 19:30:52.160523 kernel: Bridge firewalling registered Dec 12 19:30:52.160536 kernel: audit: type=1130 audit(1765567852.138:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.160547 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 19:30:52.160566 systemd-journald[332]: Journal started Dec 12 19:30:52.160590 systemd-journald[332]: Runtime Journal (/run/log/journal/3819a32686db48c4a73e4dd406cfd4a8) is 4.7M, max 37.8M, 33M free. Dec 12 19:30:52.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.138217 systemd-modules-load[334]: Inserted module 'br_netfilter' Dec 12 19:30:52.183810 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 19:30:52.183843 kernel: audit: type=1130 audit(1765567852.179:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.188042 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 19:30:52.191941 kernel: audit: type=1130 audit(1765567852.184:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.191969 kernel: audit: type=1130 audit(1765567852.187:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.192382 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 19:30:52.193583 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 19:30:52.195816 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 19:30:52.198897 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 19:30:52.224346 systemd-tmpfiles[353]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 19:30:52.224882 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 19:30:52.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.231047 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 12 19:30:52.235988 kernel: audit: type=1130 audit(1765567852.227:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.236017 kernel: audit: type=1130 audit(1765567852.230:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.231853 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 19:30:52.240614 kernel: audit: type=1130 audit(1765567852.235:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.240665 kernel: audit: type=1334 audit(1765567852.237:9): prog-id=6 op=LOAD Dec 12 19:30:52.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.237000 audit: BPF prog-id=6 op=LOAD Dec 12 19:30:52.241031 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 19:30:52.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.241639 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 19:30:52.250536 kernel: audit: type=1130 audit(1765567852.241:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.249813 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 19:30:52.282933 dracut-cmdline[375]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 12 19:30:52.297958 systemd-resolved[370]: Positive Trust Anchors: Dec 12 19:30:52.297972 systemd-resolved[370]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 19:30:52.297976 systemd-resolved[370]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 19:30:52.298017 systemd-resolved[370]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 19:30:52.336891 systemd-resolved[370]: Defaulting to hostname 'linux'. Dec 12 19:30:52.338110 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 19:30:52.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.340120 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 19:30:52.401681 kernel: Loading iSCSI transport class v2.0-870. Dec 12 19:30:52.419724 kernel: iscsi: registered transport (tcp) Dec 12 19:30:52.448717 kernel: iscsi: registered transport (qla4xxx) Dec 12 19:30:52.448832 kernel: QLogic iSCSI HBA Driver Dec 12 19:30:52.483217 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 19:30:52.518829 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 19:30:52.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.521247 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 19:30:52.575860 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 19:30:52.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.578536 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 19:30:52.580240 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 19:30:52.622530 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 19:30:52.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.623000 audit: BPF prog-id=7 op=LOAD Dec 12 19:30:52.623000 audit: BPF prog-id=8 op=LOAD Dec 12 19:30:52.625338 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 19:30:52.660580 systemd-udevd[615]: Using default interface naming scheme 'v257'. Dec 12 19:30:52.674712 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 19:30:52.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:30:52.676772 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 19:30:52.707469 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 19:30:52.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.708598 dracut-pre-trigger[689]: rd.md=0: removing MD RAID activation Dec 12 19:30:52.708000 audit: BPF prog-id=9 op=LOAD Dec 12 19:30:52.710804 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 19:30:52.740626 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 19:30:52.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.744809 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 19:30:52.764488 systemd-networkd[725]: lo: Link UP Dec 12 19:30:52.764496 systemd-networkd[725]: lo: Gained carrier Dec 12 19:30:52.765575 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 19:30:52.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.767815 systemd[1]: Reached target network.target - Network. Dec 12 19:30:52.839206 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 19:30:52.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:52.844385 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 19:30:52.927140 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 19:30:52.956592 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 19:30:52.981299 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 19:30:52.992180 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 19:30:52.993943 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 19:30:53.012783 disk-uuid[780]: Primary Header is updated. Dec 12 19:30:53.012783 disk-uuid[780]: Secondary Entries is updated. Dec 12 19:30:53.012783 disk-uuid[780]: Secondary Header is updated. Dec 12 19:30:53.040683 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 12 19:30:53.087690 kernel: cryptd: max_cpu_qlen set to 1000 Dec 12 19:30:53.088533 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 19:30:53.088543 systemd-networkd[725]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 12 19:30:53.088961 systemd-networkd[725]: eth0: Link UP Dec 12 19:30:53.089599 systemd-networkd[725]: eth0: Gained carrier Dec 12 19:30:53.089612 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 19:30:53.093349 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 19:30:53.093843 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 19:30:53.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:53.095474 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 19:30:53.099087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 19:30:53.114760 systemd-networkd[725]: eth0: DHCPv4 address 10.244.101.34/30, gateway 10.244.101.33 acquired from 10.244.101.33 Dec 12 19:30:53.146686 kernel: AES CTR mode by8 optimization enabled Dec 12 19:30:53.223945 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 19:30:53.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:53.235540 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 19:30:53.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:53.236935 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 19:30:53.237395 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 19:30:53.238725 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 19:30:53.241002 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 19:30:53.268546 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 19:30:53.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.091247 disk-uuid[781]: Warning: The kernel is still using the old partition table. Dec 12 19:30:54.091247 disk-uuid[781]: The new table will be used at the next reboot or after you Dec 12 19:30:54.091247 disk-uuid[781]: run partprobe(8) or kpartx(8) Dec 12 19:30:54.091247 disk-uuid[781]: The operation has completed successfully. Dec 12 19:30:54.102899 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 19:30:54.103054 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 19:30:54.112085 kernel: kauditd_printk_skb: 16 callbacks suppressed Dec 12 19:30:54.112135 kernel: audit: type=1130 audit(1765567854.103:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.112153 kernel: audit: type=1131 audit(1765567854.103:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:30:54.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.105030 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 19:30:54.136685 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (869) Dec 12 19:30:54.136752 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 19:30:54.136770 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 19:30:54.141905 kernel: BTRFS info (device vda6): turning on async discard Dec 12 19:30:54.141971 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 19:30:54.150737 kernel: BTRFS info (device vda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 19:30:54.151488 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 19:30:54.155251 kernel: audit: type=1130 audit(1765567854.151:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.154810 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 19:30:54.363792 ignition[888]: Ignition 2.22.0 Dec 12 19:30:54.363808 ignition[888]: Stage: fetch-offline Dec 12 19:30:54.363850 ignition[888]: no configs at "/usr/lib/ignition/base.d" Dec 12 19:30:54.363861 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 19:30:54.363964 ignition[888]: parsed url from cmdline: "" Dec 12 19:30:54.363968 ignition[888]: no config URL provided Dec 12 19:30:54.363974 ignition[888]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 19:30:54.363982 ignition[888]: no config at "/usr/lib/ignition/user.ign" Dec 12 19:30:54.363988 ignition[888]: failed to fetch config: resource requires networking Dec 12 19:30:54.364247 ignition[888]: Ignition finished successfully Dec 12 19:30:54.374159 kernel: audit: type=1130 audit(1765567854.370:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.369425 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 19:30:54.374160 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 12 19:30:54.407963 ignition[894]: Ignition 2.22.0 Dec 12 19:30:54.408600 ignition[894]: Stage: fetch Dec 12 19:30:54.408798 ignition[894]: no configs at "/usr/lib/ignition/base.d" Dec 12 19:30:54.408808 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 19:30:54.408915 ignition[894]: parsed url from cmdline: "" Dec 12 19:30:54.408919 ignition[894]: no config URL provided Dec 12 19:30:54.408925 ignition[894]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 19:30:54.408932 ignition[894]: no config at "/usr/lib/ignition/user.ign" Dec 12 19:30:54.409070 ignition[894]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 12 19:30:54.409385 ignition[894]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 12 19:30:54.409424 ignition[894]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 12 19:30:54.428020 ignition[894]: GET result: OK Dec 12 19:30:54.429145 ignition[894]: parsing config with SHA512: a590bc70395b80215065cd265f5fe34c9aea60760d41073d4dd03eb035af2f3ea335dc1e48cbb1d49ead90f1864d40a876bac9f2fe7935edd57897187982e2a1 Dec 12 19:30:54.440824 unknown[894]: fetched base config from "system" Dec 12 19:30:54.440839 unknown[894]: fetched base config from "system" Dec 12 19:30:54.441391 ignition[894]: fetch: fetch complete Dec 12 19:30:54.440848 unknown[894]: fetched user config from "openstack" Dec 12 19:30:54.441399 ignition[894]: fetch: fetch passed Dec 12 19:30:54.441473 ignition[894]: Ignition finished successfully Dec 12 19:30:54.445160 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 12 19:30:54.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.449693 kernel: audit: type=1130 audit(1765567854.444:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.450097 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 19:30:54.482062 ignition[901]: Ignition 2.22.0 Dec 12 19:30:54.482077 ignition[901]: Stage: kargs Dec 12 19:30:54.482259 ignition[901]: no configs at "/usr/lib/ignition/base.d" Dec 12 19:30:54.482269 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 19:30:54.483259 ignition[901]: kargs: kargs passed Dec 12 19:30:54.483307 ignition[901]: Ignition finished successfully Dec 12 19:30:54.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.485588 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 19:30:54.488813 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 19:30:54.491773 kernel: audit: type=1130 audit(1765567854.485:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:30:54.527122 ignition[907]: Ignition 2.22.0 Dec 12 19:30:54.527138 ignition[907]: Stage: disks Dec 12 19:30:54.527303 ignition[907]: no configs at "/usr/lib/ignition/base.d" Dec 12 19:30:54.527312 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 19:30:54.528566 ignition[907]: disks: disks passed Dec 12 19:30:54.530459 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 19:30:54.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.528617 ignition[907]: Ignition finished successfully Dec 12 19:30:54.535211 kernel: audit: type=1130 audit(1765567854.530:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.531997 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 19:30:54.534887 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 19:30:54.535718 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 19:30:54.536550 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 19:30:54.537386 systemd[1]: Reached target basic.target - Basic System. Dec 12 19:30:54.539145 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 19:30:54.578936 systemd-fsck[915]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 12 19:30:54.582401 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 19:30:54.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.584148 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 19:30:54.587624 kernel: audit: type=1130 audit(1765567854.582:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.686445 systemd-networkd[725]: eth0: Gained IPv6LL Dec 12 19:30:54.709666 kernel: EXT4-fs (vda9): mounted filesystem 7cac6192-738c-43cc-9341-24f71d091e91 r/w with ordered data mode. Quota mode: none. Dec 12 19:30:54.711769 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 19:30:54.714544 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 19:30:54.717767 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 19:30:54.720759 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 19:30:54.721989 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 19:30:54.723833 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 12 19:30:54.725965 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 19:30:54.726753 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 19:30:54.734930 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Dec 12 19:30:54.737800 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 19:30:54.744682 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (923) Dec 12 19:30:54.748153 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 19:30:54.749678 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 19:30:54.756723 kernel: BTRFS info (device vda6): turning on async discard Dec 12 19:30:54.756775 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 19:30:54.759438 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 19:30:54.814689 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:30:54.834205 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 19:30:54.840569 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory Dec 12 19:30:54.846309 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 19:30:54.851012 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 19:30:54.962080 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 19:30:54.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.965764 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 19:30:54.970616 kernel: audit: type=1130 audit(1765567854.961:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:54.982830 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 19:30:54.992759 kernel: BTRFS info (device vda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 19:30:55.016928 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 19:30:55.025093 kernel: audit: type=1130 audit(1765567855.016:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:55.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:55.036683 ignition[1040]: INFO : Ignition 2.22.0 Dec 12 19:30:55.036683 ignition[1040]: INFO : Stage: mount Dec 12 19:30:55.036683 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 19:30:55.036683 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 19:30:55.039249 ignition[1040]: INFO : mount: mount passed Dec 12 19:30:55.039249 ignition[1040]: INFO : Ignition finished successfully Dec 12 19:30:55.040813 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 19:30:55.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:30:55.128229 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Dec 12 19:30:55.222200 systemd-networkd[725]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:1948:24:19ff:fef4:6522/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:1948:24:19ff:fef4:6522/64 assigned by NDisc. Dec 12 19:30:55.222209 systemd-networkd[725]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 12 19:30:55.838730 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:30:57.848682 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:31:01.861685 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:31:01.868112 coreos-metadata[925]: Dec 12 19:31:01.867 WARN failed to locate config-drive, using the metadata service API instead Dec 12 19:31:01.888666 coreos-metadata[925]: Dec 12 19:31:01.888 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 12 19:31:01.900167 coreos-metadata[925]: Dec 12 19:31:01.900 INFO Fetch successful Dec 12 19:31:01.900931 coreos-metadata[925]: Dec 12 19:31:01.900 INFO wrote hostname srv-i3fa2.gb1.brightbox.com to /sysroot/etc/hostname Dec 12 19:31:01.903837 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 12 19:31:01.912358 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 19:31:01.912387 kernel: audit: type=1130 audit(1765567861.904:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:01.912404 kernel: audit: type=1131 audit(1765567861.904:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:01.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:01.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:01.903976 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 12 19:31:01.907753 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 19:31:01.933001 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 19:31:01.955702 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1056) Dec 12 19:31:01.958055 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 12 19:31:01.958125 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 19:31:01.963716 kernel: BTRFS info (device vda6): turning on async discard Dec 12 19:31:01.963827 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 19:31:01.968019 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 19:31:02.002529 ignition[1073]: INFO : Ignition 2.22.0 Dec 12 19:31:02.002529 ignition[1073]: INFO : Stage: files Dec 12 19:31:02.003884 ignition[1073]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 19:31:02.003884 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 19:31:02.003884 ignition[1073]: DEBUG : files: compiled without relabeling support, skipping Dec 12 19:31:02.005779 ignition[1073]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 19:31:02.005779 ignition[1073]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 19:31:02.008376 ignition[1073]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 19:31:02.009100 ignition[1073]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 19:31:02.009939 unknown[1073]: wrote ssh authorized keys file for user: core Dec 12 19:31:02.010600 ignition[1073]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 19:31:02.011685 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 12 19:31:02.012439 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 12 19:31:02.226288 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 19:31:02.458983 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 12 19:31:02.458983 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 19:31:02.460820 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 19:31:02.460820 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 19:31:02.460820 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 19:31:02.460820 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 19:31:02.460820 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 19:31:02.460820 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 19:31:02.460820 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 19:31:02.460820 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 19:31:02.460820 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 19:31:02.460820 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 12 19:31:02.467787 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 12 19:31:02.467787 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 12 19:31:02.467787 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Dec 12 19:31:03.185131 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 19:31:05.443296 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 12 19:31:05.443296 ignition[1073]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 19:31:05.448958 ignition[1073]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 19:31:05.448958 ignition[1073]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 19:31:05.448958 ignition[1073]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 19:31:05.448958 ignition[1073]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 12 19:31:05.448958 ignition[1073]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 19:31:05.448958 ignition[1073]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 19:31:05.448958 ignition[1073]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 19:31:05.448958 ignition[1073]: INFO : files: files passed Dec 12 19:31:05.448958 ignition[1073]: INFO : Ignition finished successfully Dec 12 19:31:05.461873 kernel: audit: type=1130 audit(1765567865.455:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.451231 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 19:31:05.461588 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 19:31:05.465850 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 19:31:05.481351 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 19:31:05.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.482137 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 19:31:05.486071 kernel: audit: type=1130 audit(1765567865.481:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:31:05.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.489721 kernel: audit: type=1131 audit(1765567865.482:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.498703 initrd-setup-root-after-ignition[1105]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 19:31:05.498703 initrd-setup-root-after-ignition[1105]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 19:31:05.503282 initrd-setup-root-after-ignition[1109]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 19:31:05.505509 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 19:31:05.513639 kernel: audit: type=1130 audit(1765567865.506:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.508083 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 19:31:05.515366 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 19:31:05.574876 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 19:31:05.575036 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 19:31:05.581902 kernel: audit: type=1130 audit(1765567865.575:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.581949 kernel: audit: type=1131 audit(1765567865.575:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.576219 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 19:31:05.582271 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 19:31:05.583435 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 19:31:05.584824 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 19:31:05.627406 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 19:31:05.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 12 19:31:05.631861 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 19:31:05.635498 kernel: audit: type=1130 audit(1765567865.627:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.660759 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 19:31:05.661090 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 19:31:05.663005 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 19:31:05.664038 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 19:31:05.665185 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 19:31:05.665360 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 19:31:05.667073 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 19:31:05.670871 kernel: audit: type=1131 audit(1765567865.666:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.670327 systemd[1]: Stopped target basic.target - Basic System. Dec 12 19:31:05.671292 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 19:31:05.672192 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 19:31:05.673278 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 19:31:05.674335 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 19:31:05.675260 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 19:31:05.676367 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 19:31:05.677437 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 19:31:05.678512 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 19:31:05.679629 systemd[1]: Stopped target swap.target - Swaps. Dec 12 19:31:05.680429 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 19:31:05.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.680641 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 19:31:05.681930 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 19:31:05.683496 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 19:31:05.685147 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 19:31:05.685602 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 19:31:05.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.687188 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Dec 12 19:31:05.687548 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 19:31:05.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.689599 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 19:31:05.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.690027 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 19:31:05.691492 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 19:31:05.691745 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 19:31:05.694920 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 19:31:05.697856 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 19:31:05.698863 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 19:31:05.699553 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 19:31:05.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.702994 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 19:31:05.703644 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 19:31:05.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.704892 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 19:31:05.705004 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 19:31:05.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.711033 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 19:31:05.711647 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 19:31:05.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.734123 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Dec 12 19:31:05.739727 ignition[1129]: INFO : Ignition 2.22.0 Dec 12 19:31:05.739727 ignition[1129]: INFO : Stage: umount Dec 12 19:31:05.739727 ignition[1129]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 19:31:05.739727 ignition[1129]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 19:31:05.744011 ignition[1129]: INFO : umount: umount passed Dec 12 19:31:05.744011 ignition[1129]: INFO : Ignition finished successfully Dec 12 19:31:05.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.743149 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 19:31:05.743337 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 19:31:05.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.744840 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 19:31:05.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.744955 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 19:31:05.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.746026 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 19:31:05.746138 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 19:31:05.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.747852 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 19:31:05.747999 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 19:31:05.749159 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 19:31:05.749264 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 19:31:05.750483 systemd[1]: Stopped target network.target - Network. Dec 12 19:31:05.751096 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 19:31:05.751154 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 19:31:05.751861 systemd[1]: Stopped target paths.target - Path Units. Dec 12 19:31:05.752499 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 19:31:05.755742 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 19:31:05.756557 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 19:31:05.757375 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 19:31:05.758294 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 19:31:05.758347 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Dec 12 19:31:05.759314 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 19:31:05.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.759374 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 19:31:05.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.760315 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 12 19:31:05.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.760363 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 12 19:31:05.761253 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 19:31:05.761341 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 19:31:05.762240 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 19:31:05.762311 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 19:31:05.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.763279 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 19:31:05.763365 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 19:31:05.765230 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 19:31:05.775114 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 19:31:05.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.782017 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 19:31:05.782162 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 19:31:05.786306 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 19:31:05.787046 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 19:31:05.792358 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 19:31:05.790000 audit: BPF prog-id=6 op=UNLOAD Dec 12 19:31:05.791000 audit: BPF prog-id=9 op=UNLOAD Dec 12 19:31:05.793306 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 19:31:05.793381 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 19:31:05.795899 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 19:31:05.796918 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 19:31:05.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.796977 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Dec 12 19:31:05.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.799988 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 19:31:05.800079 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 19:31:05.800923 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 19:31:05.800994 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 19:31:05.802789 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 19:31:05.813217 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 19:31:05.813857 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 19:31:05.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.815199 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 19:31:05.815815 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 19:31:05.816750 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 19:31:05.816785 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 19:31:05.818198 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 19:31:05.818645 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 19:31:05.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.819816 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 19:31:05.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.819868 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 19:31:05.821327 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 19:31:05.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.821375 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 19:31:05.824987 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 19:31:05.826625 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 19:31:05.827330 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 19:31:05.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.828828 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Dec 12 19:31:05.828891 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 19:31:05.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.830871 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 12 19:31:05.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.830929 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 19:31:05.833265 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 19:31:05.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.833809 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 19:31:05.835127 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 19:31:05.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.835234 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 19:31:05.846503 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 19:31:05.851887 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 19:31:05.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.854166 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 19:31:05.854306 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 19:31:05.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:05.855885 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 19:31:05.857356 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 19:31:05.879133 systemd[1]: Switching root. Dec 12 19:31:05.913606 systemd-journald[332]: Journal stopped Dec 12 19:31:07.102389 systemd-journald[332]: Received SIGTERM from PID 1 (systemd). 
Dec 12 19:31:07.102569 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 19:31:07.102601 kernel: SELinux: policy capability open_perms=1 Dec 12 19:31:07.102616 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 19:31:07.102638 kernel: SELinux: policy capability always_check_network=0 Dec 12 19:31:07.104691 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 19:31:07.104728 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 19:31:07.104747 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 19:31:07.104772 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 19:31:07.104790 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 19:31:07.104811 systemd[1]: Successfully loaded SELinux policy in 69.988ms. Dec 12 19:31:07.104844 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.946ms. Dec 12 19:31:07.104865 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 19:31:07.104892 systemd[1]: Detected virtualization kvm. Dec 12 19:31:07.104907 systemd[1]: Detected architecture x86-64. Dec 12 19:31:07.104926 systemd[1]: Detected first boot. Dec 12 19:31:07.104952 systemd[1]: Hostname set to . Dec 12 19:31:07.104967 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 19:31:07.104983 zram_generator::config[1173]: No configuration found. Dec 12 19:31:07.105016 kernel: Guest personality initialized and is inactive Dec 12 19:31:07.105035 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 12 19:31:07.105056 kernel: Initialized host personality Dec 12 19:31:07.105071 kernel: NET: Registered PF_VSOCK protocol family Dec 12 19:31:07.105086 systemd[1]: Populated /etc with preset unit settings. Dec 12 19:31:07.105110 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 19:31:07.105125 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 19:31:07.105142 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 19:31:07.105175 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 19:31:07.105191 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 19:31:07.105207 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 19:31:07.105228 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 19:31:07.105246 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 19:31:07.105263 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 19:31:07.105282 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 19:31:07.105298 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 19:31:07.105313 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 19:31:07.105330 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 19:31:07.105346 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Dec 12 19:31:07.105361 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 19:31:07.105376 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 19:31:07.105398 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 19:31:07.105418 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 19:31:07.105442 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 19:31:07.105457 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 19:31:07.105480 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 19:31:07.105503 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 19:31:07.105517 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 19:31:07.105532 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 19:31:07.105549 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 19:31:07.105564 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 19:31:07.105582 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 19:31:07.105602 systemd[1]: Reached target slices.target - Slice Units. Dec 12 19:31:07.105619 systemd[1]: Reached target swap.target - Swaps. Dec 12 19:31:07.105644 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 19:31:07.107712 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 19:31:07.107739 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 19:31:07.107755 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 19:31:07.107772 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 12 19:31:07.107789 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 19:31:07.107804 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 19:31:07.107830 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 12 19:31:07.107846 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 19:31:07.107861 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 19:31:07.107877 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 19:31:07.107896 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 19:31:07.107911 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 19:31:07.107927 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 19:31:07.107947 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 19:31:07.107962 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 19:31:07.107982 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 19:31:07.108001 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 19:31:07.108018 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Dec 12 19:31:07.108033 systemd[1]: Reached target machines.target - Containers. Dec 12 19:31:07.108048 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 19:31:07.108069 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 19:31:07.108084 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 19:31:07.108101 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 19:31:07.108117 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 19:31:07.108132 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 19:31:07.108148 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 19:31:07.108168 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 19:31:07.108183 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 19:31:07.108202 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 19:31:07.108217 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 19:31:07.108236 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 19:31:07.108252 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 19:31:07.108272 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 19:31:07.108288 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 19:31:07.108305 kernel: kauditd_printk_skb: 58 callbacks suppressed Dec 12 19:31:07.108321 kernel: audit: type=1334 audit(1765567866.907:106): prog-id=15 op=LOAD Dec 12 19:31:07.108337 kernel: audit: type=1334 audit(1765567866.908:107): prog-id=16 op=LOAD Dec 12 19:31:07.108356 kernel: audit: type=1334 audit(1765567866.909:108): prog-id=17 op=LOAD Dec 12 19:31:07.108376 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 19:31:07.108391 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 19:31:07.108412 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 19:31:07.108437 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 19:31:07.108452 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 19:31:07.108473 kernel: fuse: init (API version 7.41) Dec 12 19:31:07.108488 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 19:31:07.108504 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 19:31:07.108520 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 19:31:07.108535 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 19:31:07.108554 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 19:31:07.108579 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 19:31:07.108594 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
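For context, the modprobe@*.service units being started above are instances of one template unit that simply invokes modprobe on the instance name. A minimal shell equivalent, assuming the usual upstream ExecStart (the exact flags on this image may differ):

    modprobe -abq dm_mod   # what modprobe@dm_mod.service does
    modprobe -abq loop     # modprobe@loop.service
    modprobe -abq fuse     # modprobe@fuse.service; the "fuse: init (API version 7.41)" line above is the result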
Dec 12 19:31:07.108610 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 19:31:07.108625 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 19:31:07.108640 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 19:31:07.110690 kernel: audit: type=1130 audit(1765567866.985:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.110717 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 19:31:07.110737 kernel: ACPI: bus type drm_connector registered Dec 12 19:31:07.110756 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 19:31:07.110772 kernel: audit: type=1130 audit(1765567866.996:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.110789 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 19:31:07.110804 kernel: audit: type=1131 audit(1765567866.996:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.110827 kernel: audit: type=1130 audit(1765567867.013:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.110843 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 19:31:07.110858 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 19:31:07.110879 kernel: audit: type=1131 audit(1765567867.013:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.110893 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 19:31:07.110911 kernel: audit: type=1130 audit(1765567867.022:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.110931 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 19:31:07.110948 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 19:31:07.110965 kernel: audit: type=1131 audit(1765567867.022:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.110980 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 19:31:07.110996 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 19:31:07.111012 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 19:31:07.111028 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 19:31:07.111052 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Dec 12 19:31:07.111072 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 19:31:07.111087 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 19:31:07.111102 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 19:31:07.111117 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 19:31:07.111136 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 19:31:07.111152 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 19:31:07.111172 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 19:31:07.111189 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 19:31:07.111208 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 19:31:07.111223 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 19:31:07.111239 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 19:31:07.111254 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 19:31:07.111277 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 19:31:07.111296 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 19:31:07.111343 systemd-journald[1265]: Collecting audit messages is enabled. Dec 12 19:31:07.111381 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 19:31:07.111397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 19:31:07.111413 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 19:31:07.111448 systemd-journald[1265]: Journal started Dec 12 19:31:07.111483 systemd-journald[1265]: Runtime Journal (/run/log/journal/3819a32686db48c4a73e4dd406cfd4a8) is 4.7M, max 37.8M, 33M free. Dec 12 19:31:06.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:06.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:06.902000 audit: BPF prog-id=14 op=UNLOAD Dec 12 19:31:06.902000 audit: BPF prog-id=13 op=UNLOAD Dec 12 19:31:06.907000 audit: BPF prog-id=15 op=LOAD Dec 12 19:31:06.908000 audit: BPF prog-id=16 op=LOAD Dec 12 19:31:06.909000 audit: BPF prog-id=17 op=LOAD Dec 12 19:31:06.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:31:06.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:06.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:31:07.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.099000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 19:31:07.099000 audit[1265]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe40c77e00 a2=4000 a3=0 items=0 ppid=1 pid=1265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:07.099000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 19:31:06.695600 systemd[1]: Queued start job for default target multi-user.target. Dec 12 19:31:06.725165 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 19:31:06.726229 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 19:31:07.124701 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 19:31:07.124759 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 19:31:07.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.128136 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 19:31:07.129487 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 19:31:07.130218 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 19:31:07.148438 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 19:31:07.153110 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 19:31:07.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.153743 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 19:31:07.157038 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 19:31:07.165674 kernel: loop1: detected capacity change from 0 to 229808 Dec 12 19:31:07.173781 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 19:31:07.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:31:07.186532 systemd-journald[1265]: Time spent on flushing to /var/log/journal/3819a32686db48c4a73e4dd406cfd4a8 is 59.023ms for 1326 entries. Dec 12 19:31:07.186532 systemd-journald[1265]: System Journal (/var/log/journal/3819a32686db48c4a73e4dd406cfd4a8) is 8M, max 588.1M, 580.1M free. Dec 12 19:31:07.263815 systemd-journald[1265]: Received client request to flush runtime journal. Dec 12 19:31:07.263972 kernel: loop2: detected capacity change from 0 to 8 Dec 12 19:31:07.264002 kernel: loop3: detected capacity change from 0 to 119256 Dec 12 19:31:07.264025 kernel: loop4: detected capacity change from 0 to 111544 Dec 12 19:31:07.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.254000 audit: BPF prog-id=18 op=LOAD Dec 12 19:31:07.256000 audit: BPF prog-id=19 op=LOAD Dec 12 19:31:07.256000 audit: BPF prog-id=20 op=LOAD Dec 12 19:31:07.259000 audit: BPF prog-id=21 op=LOAD Dec 12 19:31:07.197762 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 19:31:07.199928 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Dec 12 19:31:07.199944 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Dec 12 19:31:07.211154 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 19:31:07.214566 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 19:31:07.230050 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 19:31:07.253842 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 19:31:07.258929 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 12 19:31:07.263822 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 19:31:07.266371 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 19:31:07.271130 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 19:31:07.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.289000 audit: BPF prog-id=22 op=LOAD Dec 12 19:31:07.289000 audit: BPF prog-id=23 op=LOAD Dec 12 19:31:07.289000 audit: BPF prog-id=24 op=LOAD Dec 12 19:31:07.293197 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... 
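The journal size caps reported by systemd-journald above (37.8M for the runtime journal in /run, 588.1M for the system journal in /var/log/journal) are derived from the size of the backing filesystems. They can be pinned explicitly with a journald.conf drop-in; a minimal sketch with hypothetical values:

    mkdir -p /etc/systemd/journald.conf.d
    printf '[Journal]\nRuntimeMaxUse=64M\nSystemMaxUse=512M\n' > /etc/systemd/journald.conf.d/10-size.conf
    systemctl restart systemd-journald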
Dec 12 19:31:07.295000 audit: BPF prog-id=25 op=LOAD Dec 12 19:31:07.296000 audit: BPF prog-id=26 op=LOAD Dec 12 19:31:07.296000 audit: BPF prog-id=27 op=LOAD Dec 12 19:31:07.297674 kernel: loop5: detected capacity change from 0 to 229808 Dec 12 19:31:07.298920 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 19:31:07.300548 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Dec 12 19:31:07.300878 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Dec 12 19:31:07.306814 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 19:31:07.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.317133 kernel: loop6: detected capacity change from 0 to 8 Dec 12 19:31:07.321182 kernel: loop7: detected capacity change from 0 to 119256 Dec 12 19:31:07.334847 kernel: loop1: detected capacity change from 0 to 111544 Dec 12 19:31:07.347538 (sd-merge)[1340]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'. Dec 12 19:31:07.357975 (sd-merge)[1340]: Merged extensions into '/usr'. Dec 12 19:31:07.368984 systemd[1]: Reload requested from client PID 1294 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 19:31:07.369003 systemd[1]: Reloading... Dec 12 19:31:07.373522 systemd-nsresourced[1339]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 12 19:31:07.473686 zram_generator::config[1380]: No configuration found. Dec 12 19:31:07.536042 systemd-oomd[1331]: No swap; memory pressure usage will be degraded Dec 12 19:31:07.562021 systemd-resolved[1333]: Positive Trust Anchors: Dec 12 19:31:07.562358 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 19:31:07.562367 systemd-resolved[1333]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 19:31:07.562419 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 19:31:07.584182 systemd-resolved[1333]: Using system hostname 'srv-i3fa2.gb1.brightbox.com'. Dec 12 19:31:07.756527 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 19:31:07.756854 systemd[1]: Reloading finished in 387 ms. Dec 12 19:31:07.772369 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 19:31:07.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.773065 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. 
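The (sd-merge) lines above show systemd-sysext overlaying the Flatcar extension images (containerd-flatcar.raw, docker-flatcar.raw, kubernetes.raw, oem-openstack.raw) onto /usr. Additional extension images can be staged and merged with the same mechanism; a minimal sketch, where my-extension.raw is a hypothetical image name:

    systemd-sysext status                      # show which extension images are currently merged
    cp my-extension.raw /var/lib/extensions/   # one of the directories systemd-sysext scans for images
    systemd-sysext refresh                     # unmerge and re-merge so the new image becomes visible under /usr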
Dec 12 19:31:07.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.773763 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 12 19:31:07.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.774312 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 19:31:07.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.775028 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 19:31:07.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:07.778445 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 19:31:07.788823 systemd[1]: Starting ensure-sysext.service... Dec 12 19:31:07.794857 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 19:31:07.795000 audit: BPF prog-id=28 op=LOAD Dec 12 19:31:07.795000 audit: BPF prog-id=18 op=UNLOAD Dec 12 19:31:07.795000 audit: BPF prog-id=29 op=LOAD Dec 12 19:31:07.795000 audit: BPF prog-id=30 op=LOAD Dec 12 19:31:07.795000 audit: BPF prog-id=19 op=UNLOAD Dec 12 19:31:07.795000 audit: BPF prog-id=20 op=UNLOAD Dec 12 19:31:07.796000 audit: BPF prog-id=31 op=LOAD Dec 12 19:31:07.796000 audit: BPF prog-id=25 op=UNLOAD Dec 12 19:31:07.796000 audit: BPF prog-id=32 op=LOAD Dec 12 19:31:07.796000 audit: BPF prog-id=33 op=LOAD Dec 12 19:31:07.796000 audit: BPF prog-id=26 op=UNLOAD Dec 12 19:31:07.796000 audit: BPF prog-id=27 op=UNLOAD Dec 12 19:31:07.797000 audit: BPF prog-id=34 op=LOAD Dec 12 19:31:07.797000 audit: BPF prog-id=22 op=UNLOAD Dec 12 19:31:07.797000 audit: BPF prog-id=35 op=LOAD Dec 12 19:31:07.797000 audit: BPF prog-id=36 op=LOAD Dec 12 19:31:07.798000 audit: BPF prog-id=23 op=UNLOAD Dec 12 19:31:07.798000 audit: BPF prog-id=24 op=UNLOAD Dec 12 19:31:07.800000 audit: BPF prog-id=37 op=LOAD Dec 12 19:31:07.800000 audit: BPF prog-id=21 op=UNLOAD Dec 12 19:31:07.801000 audit: BPF prog-id=38 op=LOAD Dec 12 19:31:07.801000 audit: BPF prog-id=15 op=UNLOAD Dec 12 19:31:07.801000 audit: BPF prog-id=39 op=LOAD Dec 12 19:31:07.801000 audit: BPF prog-id=40 op=LOAD Dec 12 19:31:07.803000 audit: BPF prog-id=16 op=UNLOAD Dec 12 19:31:07.803000 audit: BPF prog-id=17 op=UNLOAD Dec 12 19:31:07.826503 systemd[1]: Reload requested from client PID 1439 ('systemctl') (unit ensure-sysext.service)... Dec 12 19:31:07.826522 systemd[1]: Reloading... Dec 12 19:31:07.829245 systemd-tmpfiles[1440]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 19:31:07.829277 systemd-tmpfiles[1440]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 19:31:07.829579 systemd-tmpfiles[1440]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Dec 12 19:31:07.833004 systemd-tmpfiles[1440]: ACLs are not supported, ignoring. Dec 12 19:31:07.833072 systemd-tmpfiles[1440]: ACLs are not supported, ignoring. Dec 12 19:31:07.843367 systemd-tmpfiles[1440]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 19:31:07.843390 systemd-tmpfiles[1440]: Skipping /boot Dec 12 19:31:07.864817 systemd-tmpfiles[1440]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 19:31:07.864830 systemd-tmpfiles[1440]: Skipping /boot Dec 12 19:31:07.929413 zram_generator::config[1472]: No configuration found. Dec 12 19:31:08.164438 systemd[1]: Reloading finished in 337 ms. Dec 12 19:31:08.177176 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 19:31:08.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:08.181000 audit: BPF prog-id=41 op=LOAD Dec 12 19:31:08.181000 audit: BPF prog-id=31 op=UNLOAD Dec 12 19:31:08.182000 audit: BPF prog-id=42 op=LOAD Dec 12 19:31:08.182000 audit: BPF prog-id=43 op=LOAD Dec 12 19:31:08.182000 audit: BPF prog-id=32 op=UNLOAD Dec 12 19:31:08.182000 audit: BPF prog-id=33 op=UNLOAD Dec 12 19:31:08.186000 audit: BPF prog-id=44 op=LOAD Dec 12 19:31:08.190000 audit: BPF prog-id=38 op=UNLOAD Dec 12 19:31:08.190000 audit: BPF prog-id=45 op=LOAD Dec 12 19:31:08.190000 audit: BPF prog-id=46 op=LOAD Dec 12 19:31:08.190000 audit: BPF prog-id=39 op=UNLOAD Dec 12 19:31:08.190000 audit: BPF prog-id=40 op=UNLOAD Dec 12 19:31:08.192000 audit: BPF prog-id=47 op=LOAD Dec 12 19:31:08.192000 audit: BPF prog-id=34 op=UNLOAD Dec 12 19:31:08.192000 audit: BPF prog-id=48 op=LOAD Dec 12 19:31:08.193000 audit: BPF prog-id=49 op=LOAD Dec 12 19:31:08.193000 audit: BPF prog-id=35 op=UNLOAD Dec 12 19:31:08.193000 audit: BPF prog-id=36 op=UNLOAD Dec 12 19:31:08.193000 audit: BPF prog-id=50 op=LOAD Dec 12 19:31:08.194000 audit: BPF prog-id=28 op=UNLOAD Dec 12 19:31:08.194000 audit: BPF prog-id=51 op=LOAD Dec 12 19:31:08.194000 audit: BPF prog-id=52 op=LOAD Dec 12 19:31:08.194000 audit: BPF prog-id=29 op=UNLOAD Dec 12 19:31:08.194000 audit: BPF prog-id=30 op=UNLOAD Dec 12 19:31:08.195000 audit: BPF prog-id=53 op=LOAD Dec 12 19:31:08.195000 audit: BPF prog-id=37 op=UNLOAD Dec 12 19:31:08.198704 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 19:31:08.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:08.208489 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 19:31:08.212929 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 19:31:08.219924 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 19:31:08.222000 audit: BPF prog-id=8 op=UNLOAD Dec 12 19:31:08.222000 audit: BPF prog-id=7 op=UNLOAD Dec 12 19:31:08.222466 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 19:31:08.223000 audit: BPF prog-id=54 op=LOAD Dec 12 19:31:08.223000 audit: BPF prog-id=55 op=LOAD Dec 12 19:31:08.226059 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
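The "Duplicate line for path ..., ignoring" warnings above mean two tmpfiles.d entries claim the same path, and systemd-tmpfiles keeps only the first one it parses. If a packaged entry needs changing, the usual approach is to shadow the snippet by file name rather than edit it under /usr; a sketch:

    mkdir -p /etc/tmpfiles.d
    cp /usr/lib/tmpfiles.d/nfs-utils.conf /etc/tmpfiles.d/nfs-utils.conf   # same basename overrides the /usr copy
    vi /etc/tmpfiles.d/nfs-utils.conf                                      # drop or adjust the conflicting line
    systemd-tmpfiles --create /etc/tmpfiles.d/nfs-utils.conf               # apply just this configuration file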
Dec 12 19:31:08.230179 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 19:31:08.233158 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 19:31:08.233381 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 19:31:08.237092 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 19:31:08.249458 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 19:31:08.256037 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 19:31:08.260882 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 19:31:08.261126 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 19:31:08.261245 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 19:31:08.261366 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 19:31:08.265272 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 19:31:08.265487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 19:31:08.266757 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 19:31:08.266959 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 19:31:08.267051 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 19:31:08.267145 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 19:31:08.271477 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 19:31:08.271751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 19:31:08.271000 audit[1536]: SYSTEM_BOOT pid=1536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 12 19:31:08.274509 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 19:31:08.275099 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 19:31:08.275292 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 12 19:31:08.275413 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 19:31:08.275550 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 19:31:08.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:08.281156 systemd[1]: Finished ensure-sysext.service. Dec 12 19:31:08.284000 audit: BPF prog-id=56 op=LOAD Dec 12 19:31:08.286743 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 19:31:08.288048 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 19:31:08.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:08.347013 systemd-udevd[1535]: Using default interface naming scheme 'v257'. Dec 12 19:31:08.348891 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 19:31:08.349197 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 19:31:08.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:08.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:08.362268 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 19:31:08.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:08.383120 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 19:31:08.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:08.383844 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 19:31:08.385332 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 19:31:08.386016 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 19:31:08.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:31:08.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:08.387137 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 19:31:08.387869 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 19:31:08.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:08.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:08.388918 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 19:31:08.389180 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 19:31:08.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:08.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:08.392626 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 19:31:08.392834 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 19:31:08.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:08.399766 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 19:31:08.402000 audit: BPF prog-id=57 op=LOAD Dec 12 19:31:08.403866 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 19:31:08.415000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 19:31:08.415000 audit[1576]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc64dba0d0 a2=420 a3=0 items=0 ppid=1530 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:08.415000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 19:31:08.416498 augenrules[1576]: No rules Dec 12 19:31:08.420002 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 19:31:08.420281 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 19:31:08.496421 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 19:31:08.497835 systemd[1]: Reached target time-set.target - System Time Set. 
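The augenrules "No rules" message and the auditctl SYSCALL record above show audit-rules.service loading an empty /etc/audit/audit.rules via auditctl -R. Persistent rules would normally be dropped into /etc/audit/rules.d/ and compiled from there; a minimal sketch with hypothetical watch rules:

    mkdir -p /etc/audit/rules.d
    printf -- '-w /etc/passwd -p wa -k identity\n-w /etc/shadow -p wa -k identity\n' > /etc/audit/rules.d/10-identity.rules
    augenrules --load    # regenerate /etc/audit/audit.rules and load it with auditctl -R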
Dec 12 19:31:08.566507 systemd-networkd[1573]: lo: Link UP Dec 12 19:31:08.566517 systemd-networkd[1573]: lo: Gained carrier Dec 12 19:31:08.569544 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 19:31:08.570124 systemd[1]: Reached target network.target - Network. Dec 12 19:31:08.572804 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 19:31:08.577232 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 19:31:08.584617 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 12 19:31:08.624736 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 19:31:08.647675 systemd-networkd[1573]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 19:31:08.648521 systemd-networkd[1573]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 19:31:08.654549 systemd-networkd[1573]: eth0: Link UP Dec 12 19:31:08.655843 systemd-networkd[1573]: eth0: Gained carrier Dec 12 19:31:08.655907 systemd-networkd[1573]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 19:31:08.717640 systemd-networkd[1573]: eth0: DHCPv4 address 10.244.101.34/30, gateway 10.244.101.33 acquired from 10.244.101.33 Dec 12 19:31:08.718488 systemd-timesyncd[1548]: Network configuration changed, trying to establish connection. Dec 12 19:31:08.746687 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 12 19:31:08.750629 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 19:31:08.777720 kernel: ACPI: button: Power Button [PWRF] Dec 12 19:31:08.782347 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 19:31:08.787710 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 19:31:08.850430 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 19:31:08.897702 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 12 19:31:08.899706 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 12 19:31:08.949674 ldconfig[1532]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 19:31:08.952731 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 19:31:08.955682 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 19:31:08.979519 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 19:31:08.980997 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 19:31:08.981566 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 19:31:08.982035 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 19:31:08.982475 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 12 19:31:08.983073 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 19:31:08.983555 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
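systemd-networkd notes above that eth0 only matched the catch-all /usr/lib/systemd/network/zz-default.network, which is what requested DHCPv4 (10.244.101.34/30). A named .network file in /etc takes precedence over that default; a minimal sketch:

    # /etc/systemd/network/10-eth0.network: match eth0 explicitly instead of relying on zz-default.network
    printf '[Match]\nName=eth0\n\n[Network]\nDHCP=yes\n' > /etc/systemd/network/10-eth0.network
    networkctl reload    # have systemd-networkd pick up the new configuration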
Dec 12 19:31:08.984020 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 12 19:31:08.984499 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 12 19:31:08.985087 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 19:31:08.985509 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 19:31:08.985539 systemd[1]: Reached target paths.target - Path Units. Dec 12 19:31:08.985895 systemd[1]: Reached target timers.target - Timer Units. Dec 12 19:31:08.987385 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 19:31:08.989931 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 19:31:08.993380 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 19:31:08.994080 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 19:31:08.994550 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 19:31:09.000420 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 19:31:09.001219 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 19:31:09.002460 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 19:31:09.007260 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 19:31:09.007677 systemd[1]: Reached target basic.target - Basic System. Dec 12 19:31:09.008088 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 19:31:09.008119 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 19:31:09.010571 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 19:31:09.013901 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 19:31:09.017134 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 19:31:09.022897 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 19:31:09.031870 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 19:31:09.036911 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 19:31:09.037724 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 19:31:09.049931 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 12 19:31:09.054734 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:31:09.054696 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 19:31:09.057860 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 19:31:09.059421 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 19:31:09.060920 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 19:31:09.072937 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 19:31:09.073409 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
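Most of the "Listening on ..." entries above are socket-activated units: systemd holds the listening socket (dbus.socket, sshd.socket, docker.socket, and so on) and starts the matching service only on the first connection. The current mapping can be inspected at runtime; a small sketch:

    systemctl list-sockets --all    # listening address, socket unit, and the service each one activates
    systemctl cat sshd.socket       # show the unit file behind one of the sockets listed above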
Dec 12 19:31:09.074347 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 19:31:09.078826 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 19:31:09.086008 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 19:31:09.090890 jq[1633]: false Dec 12 19:31:09.104192 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 19:31:09.105210 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 19:31:09.106714 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 19:31:09.116035 google_oslogin_nss_cache[1635]: oslogin_cache_refresh[1635]: Refreshing passwd entry cache Dec 12 19:31:09.112848 oslogin_cache_refresh[1635]: Refreshing passwd entry cache Dec 12 19:31:09.144840 google_oslogin_nss_cache[1635]: oslogin_cache_refresh[1635]: Failure getting users, quitting Dec 12 19:31:09.144840 google_oslogin_nss_cache[1635]: oslogin_cache_refresh[1635]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 19:31:09.144840 google_oslogin_nss_cache[1635]: oslogin_cache_refresh[1635]: Refreshing group entry cache Dec 12 19:31:09.144840 google_oslogin_nss_cache[1635]: oslogin_cache_refresh[1635]: Failure getting groups, quitting Dec 12 19:31:09.144840 google_oslogin_nss_cache[1635]: oslogin_cache_refresh[1635]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 19:31:09.144573 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 12 19:31:09.139876 oslogin_cache_refresh[1635]: Failure getting users, quitting Dec 12 19:31:09.139896 oslogin_cache_refresh[1635]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 19:31:09.139952 oslogin_cache_refresh[1635]: Refreshing group entry cache Dec 12 19:31:09.142775 oslogin_cache_refresh[1635]: Failure getting groups, quitting Dec 12 19:31:09.142787 oslogin_cache_refresh[1635]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 19:31:09.147565 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 12 19:31:09.159360 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 19:31:09.159691 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 19:31:09.161644 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 19:31:09.163631 jq[1644]: true Dec 12 19:31:09.163162 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 19:31:09.167504 extend-filesystems[1634]: Found /dev/vda6 Dec 12 19:31:09.191953 extend-filesystems[1634]: Found /dev/vda9 Dec 12 19:31:09.201004 extend-filesystems[1634]: Checking size of /dev/vda9 Dec 12 19:31:09.218920 jq[1667]: true Dec 12 19:31:09.219143 update_engine[1643]: I20251212 19:31:09.218358 1643 main.cc:92] Flatcar Update Engine starting Dec 12 19:31:09.223835 dbus-daemon[1631]: [system] SELinux support is enabled Dec 12 19:31:09.224115 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 19:31:09.229449 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Dec 12 19:31:09.230083 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 19:31:09.232853 tar[1648]: linux-amd64/LICENSE Dec 12 19:31:09.232853 tar[1648]: linux-amd64/helm Dec 12 19:31:09.230917 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 19:31:09.230935 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 19:31:09.242782 extend-filesystems[1634]: Resized partition /dev/vda9 Dec 12 19:31:09.253153 dbus-daemon[1631]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.4' (uid=244 pid=1573 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 12 19:31:09.253496 extend-filesystems[1680]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 19:31:09.263582 update_engine[1643]: I20251212 19:31:09.263075 1643 update_check_scheduler.cc:74] Next update check in 2m5s Dec 12 19:31:09.289693 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Dec 12 19:31:09.306211 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 12 19:31:09.308498 systemd[1]: Started update-engine.service - Update Engine. Dec 12 19:31:09.314187 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 19:31:09.319245 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 19:31:09.461365 bash[1701]: Updated "/home/core/.ssh/authorized_keys" Dec 12 19:31:09.463317 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 19:31:09.472532 systemd[1]: Starting sshkeys.service... Dec 12 19:31:09.491194 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Dec 12 19:31:09.504643 extend-filesystems[1680]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 19:31:09.504643 extend-filesystems[1680]: old_desc_blocks = 1, new_desc_blocks = 7 Dec 12 19:31:09.504643 extend-filesystems[1680]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Dec 12 19:31:09.510462 extend-filesystems[1634]: Resized filesystem in /dev/vda9 Dec 12 19:31:09.507191 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 19:31:09.508722 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 19:31:09.514745 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 19:31:09.517215 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 12 19:31:09.576024 sshd_keygen[1663]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 19:31:09.582354 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:31:09.662983 systemd-networkd[1573]: eth0: Gained IPv6LL Dec 12 19:31:09.665265 systemd-timesyncd[1548]: Network configuration changed, trying to establish connection. Dec 12 19:31:09.668129 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 19:31:09.670596 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 19:31:09.678062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 19:31:09.682504 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
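[Editorial note] For reference on the resize recorded above: extend-filesystems grows the root filesystem in place (resize2fs can enlarge a mounted ext4 filesystem online, hence the "on-line resizing required" message that follows), taking /dev/vda9 from 1617920 blocks x 4 KiB, roughly 6.2 GiB, to 14138363 blocks x 4 KiB, roughly 53.9 GiB.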
Dec 12 19:31:09.687745 containerd[1670]: time="2025-12-12T19:31:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 19:31:09.699338 containerd[1670]: time="2025-12-12T19:31:09.699272932Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 12 19:31:09.726135 locksmithd[1693]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 19:31:09.737427 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 19:31:09.771490 containerd[1670]: time="2025-12-12T19:31:09.770233524Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.92µs" Dec 12 19:31:09.771490 containerd[1670]: time="2025-12-12T19:31:09.770282288Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 19:31:09.771490 containerd[1670]: time="2025-12-12T19:31:09.770332222Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 19:31:09.771490 containerd[1670]: time="2025-12-12T19:31:09.770345820Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 19:31:09.771490 containerd[1670]: time="2025-12-12T19:31:09.770491629Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 19:31:09.771490 containerd[1670]: time="2025-12-12T19:31:09.770507041Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 19:31:09.771490 containerd[1670]: time="2025-12-12T19:31:09.770560701Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 19:31:09.771490 containerd[1670]: time="2025-12-12T19:31:09.770571902Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 19:31:09.771490 containerd[1670]: time="2025-12-12T19:31:09.770860043Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 19:31:09.771490 containerd[1670]: time="2025-12-12T19:31:09.770880112Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 19:31:09.771490 containerd[1670]: time="2025-12-12T19:31:09.770894783Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 19:31:09.771490 containerd[1670]: time="2025-12-12T19:31:09.770904623Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 19:31:09.771858 containerd[1670]: time="2025-12-12T19:31:09.771064329Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 19:31:09.771858 containerd[1670]: time="2025-12-12T19:31:09.771088384Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native 
type=io.containerd.snapshotter.v1 Dec 12 19:31:09.771858 containerd[1670]: time="2025-12-12T19:31:09.771163275Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 19:31:09.771858 containerd[1670]: time="2025-12-12T19:31:09.771360713Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 19:31:09.771858 containerd[1670]: time="2025-12-12T19:31:09.771388969Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 19:31:09.771858 containerd[1670]: time="2025-12-12T19:31:09.771400320Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 19:31:09.772041 containerd[1670]: time="2025-12-12T19:31:09.772023700Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 19:31:09.772475 containerd[1670]: time="2025-12-12T19:31:09.772445537Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 19:31:09.772608 containerd[1670]: time="2025-12-12T19:31:09.772594916Z" level=info msg="metadata content store policy set" policy=shared Dec 12 19:31:09.785593 containerd[1670]: time="2025-12-12T19:31:09.783735827Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 19:31:09.785593 containerd[1670]: time="2025-12-12T19:31:09.783895927Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 19:31:09.785593 containerd[1670]: time="2025-12-12T19:31:09.783997950Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 19:31:09.785593 containerd[1670]: time="2025-12-12T19:31:09.784013885Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 19:31:09.785593 containerd[1670]: time="2025-12-12T19:31:09.784027507Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 19:31:09.785593 containerd[1670]: time="2025-12-12T19:31:09.784039637Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 19:31:09.785593 containerd[1670]: time="2025-12-12T19:31:09.784053842Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 19:31:09.785593 containerd[1670]: time="2025-12-12T19:31:09.784064395Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 19:31:09.785593 containerd[1670]: time="2025-12-12T19:31:09.784075629Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 19:31:09.785593 containerd[1670]: time="2025-12-12T19:31:09.784088462Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 19:31:09.785593 containerd[1670]: time="2025-12-12T19:31:09.784101414Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 19:31:09.785593 containerd[1670]: time="2025-12-12T19:31:09.784112505Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 19:31:09.785593 containerd[1670]: time="2025-12-12T19:31:09.784122203Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 19:31:09.785593 containerd[1670]: time="2025-12-12T19:31:09.784153823Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 19:31:09.786026 containerd[1670]: time="2025-12-12T19:31:09.784291538Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 19:31:09.786026 containerd[1670]: time="2025-12-12T19:31:09.784315852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 19:31:09.786026 containerd[1670]: time="2025-12-12T19:31:09.784330451Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 19:31:09.786026 containerd[1670]: time="2025-12-12T19:31:09.784342365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 19:31:09.786026 containerd[1670]: time="2025-12-12T19:31:09.784353855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 19:31:09.786026 containerd[1670]: time="2025-12-12T19:31:09.784365673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 19:31:09.786026 containerd[1670]: time="2025-12-12T19:31:09.784381073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 19:31:09.786026 containerd[1670]: time="2025-12-12T19:31:09.784392009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 19:31:09.786026 containerd[1670]: time="2025-12-12T19:31:09.784402912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 19:31:09.786026 containerd[1670]: time="2025-12-12T19:31:09.784415026Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 19:31:09.786026 containerd[1670]: time="2025-12-12T19:31:09.784425726Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 19:31:09.786026 containerd[1670]: time="2025-12-12T19:31:09.784455689Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 19:31:09.786026 containerd[1670]: time="2025-12-12T19:31:09.784502275Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 19:31:09.786026 containerd[1670]: time="2025-12-12T19:31:09.784519284Z" level=info msg="Start snapshots syncer" Dec 12 19:31:09.786026 containerd[1670]: time="2025-12-12T19:31:09.784564815Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 19:31:09.786379 containerd[1670]: time="2025-12-12T19:31:09.784920801Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 19:31:09.786379 containerd[1670]: time="2025-12-12T19:31:09.784971219Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 19:31:09.786564 containerd[1670]: time="2025-12-12T19:31:09.785059326Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 19:31:09.786564 containerd[1670]: time="2025-12-12T19:31:09.785160223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 19:31:09.786564 containerd[1670]: time="2025-12-12T19:31:09.785180936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 19:31:09.786564 containerd[1670]: time="2025-12-12T19:31:09.785193633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 19:31:09.786564 containerd[1670]: time="2025-12-12T19:31:09.785206754Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 19:31:09.786564 containerd[1670]: time="2025-12-12T19:31:09.785223551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 19:31:09.786564 containerd[1670]: time="2025-12-12T19:31:09.785235648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 19:31:09.786564 containerd[1670]: time="2025-12-12T19:31:09.785257838Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 19:31:09.786564 containerd[1670]: time="2025-12-12T19:31:09.785270564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 
19:31:09.786564 containerd[1670]: time="2025-12-12T19:31:09.785281922Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 19:31:09.786564 containerd[1670]: time="2025-12-12T19:31:09.785315444Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 19:31:09.786564 containerd[1670]: time="2025-12-12T19:31:09.785330144Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 19:31:09.786564 containerd[1670]: time="2025-12-12T19:31:09.785339513Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 19:31:09.786880 containerd[1670]: time="2025-12-12T19:31:09.785352298Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 19:31:09.786880 containerd[1670]: time="2025-12-12T19:31:09.785360991Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 19:31:09.786880 containerd[1670]: time="2025-12-12T19:31:09.785370719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 19:31:09.786880 containerd[1670]: time="2025-12-12T19:31:09.785381529Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 19:31:09.786880 containerd[1670]: time="2025-12-12T19:31:09.785402584Z" level=info msg="runtime interface created" Dec 12 19:31:09.786880 containerd[1670]: time="2025-12-12T19:31:09.785408422Z" level=info msg="created NRI interface" Dec 12 19:31:09.786880 containerd[1670]: time="2025-12-12T19:31:09.785416992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 19:31:09.786880 containerd[1670]: time="2025-12-12T19:31:09.785429871Z" level=info msg="Connect containerd service" Dec 12 19:31:09.786880 containerd[1670]: time="2025-12-12T19:31:09.785453246Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 19:31:09.788158 systemd-logind[1642]: Watching system buttons on /dev/input/event3 (Power Button) Dec 12 19:31:09.788185 systemd-logind[1642]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 12 19:31:09.796013 dbus-daemon[1631]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 12 19:31:09.798792 containerd[1670]: time="2025-12-12T19:31:09.797010829Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 19:31:09.800059 dbus-daemon[1631]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1689 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 12 19:31:09.850191 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 12 19:31:09.855403 systemd-logind[1642]: New seat seat0. Dec 12 19:31:09.862804 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 19:31:09.866213 systemd[1]: Started systemd-logind.service - User Login Management. 
Dec 12 19:31:09.908856 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 19:31:09.917108 systemd[1]: Starting polkit.service - Authorization Manager... Dec 12 19:31:09.976799 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 19:31:09.977091 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 19:31:09.983742 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 19:31:09.997782 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 19:31:10.001428 containerd[1670]: time="2025-12-12T19:31:10.001286524Z" level=info msg="Start subscribing containerd event" Dec 12 19:31:10.001533 containerd[1670]: time="2025-12-12T19:31:10.001467113Z" level=info msg="Start recovering state" Dec 12 19:31:10.003478 containerd[1670]: time="2025-12-12T19:31:10.001719404Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 19:31:10.004846 containerd[1670]: time="2025-12-12T19:31:10.002009596Z" level=info msg="Start event monitor" Dec 12 19:31:10.004846 containerd[1670]: time="2025-12-12T19:31:10.004559340Z" level=info msg="Start cni network conf syncer for default" Dec 12 19:31:10.004846 containerd[1670]: time="2025-12-12T19:31:10.004569075Z" level=info msg="Start streaming server" Dec 12 19:31:10.004846 containerd[1670]: time="2025-12-12T19:31:10.004592194Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 19:31:10.004846 containerd[1670]: time="2025-12-12T19:31:10.004600941Z" level=info msg="runtime interface starting up..." Dec 12 19:31:10.004846 containerd[1670]: time="2025-12-12T19:31:10.004607539Z" level=info msg="starting plugins..." Dec 12 19:31:10.005625 containerd[1670]: time="2025-12-12T19:31:10.004632251Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 19:31:10.005913 containerd[1670]: time="2025-12-12T19:31:10.004752835Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 19:31:10.007035 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 19:31:10.009741 containerd[1670]: time="2025-12-12T19:31:10.009488700Z" level=info msg="containerd successfully booted in 0.322166s" Dec 12 19:31:10.021871 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 19:31:10.027380 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 19:31:10.032907 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 19:31:10.033996 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 19:31:10.047633 polkitd[1750]: Started polkitd version 126 Dec 12 19:31:10.052381 polkitd[1750]: Loading rules from directory /etc/polkit-1/rules.d Dec 12 19:31:10.052842 polkitd[1750]: Loading rules from directory /run/polkit-1/rules.d Dec 12 19:31:10.052891 polkitd[1750]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 19:31:10.053210 polkitd[1750]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 12 19:31:10.053238 polkitd[1750]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 19:31:10.053273 polkitd[1750]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 12 19:31:10.053986 polkitd[1750]: Finished loading, compiling and executing 2 rules Dec 12 19:31:10.055298 systemd[1]: Started polkit.service - Authorization Manager. 
Dec 12 19:31:10.055513 dbus-daemon[1631]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 12 19:31:10.055779 polkitd[1750]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 12 19:31:10.068798 systemd-hostnamed[1689]: Hostname set to (static) Dec 12 19:31:10.295945 tar[1648]: linux-amd64/README.md Dec 12 19:31:10.317904 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 19:31:10.857717 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:31:10.863860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 19:31:10.869758 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:31:10.873940 (kubelet)[1785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 19:31:11.172293 systemd-timesyncd[1548]: Network configuration changed, trying to establish connection. Dec 12 19:31:11.173304 systemd-networkd[1573]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:1948:24:19ff:fef4:6522/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:1948:24:19ff:fef4:6522/64 assigned by NDisc. Dec 12 19:31:11.173314 systemd-networkd[1573]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 12 19:31:11.404684 kubelet[1785]: E1212 19:31:11.404622 1785 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 19:31:11.408544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 19:31:11.408733 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 19:31:11.409472 systemd[1]: kubelet.service: Consumed 1.132s CPU time, 265.5M memory peak. Dec 12 19:31:12.800001 systemd-timesyncd[1548]: Network configuration changed, trying to establish connection. Dec 12 19:31:12.877878 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:31:12.883720 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:31:15.136496 login[1766]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 12 19:31:15.156043 systemd-logind[1642]: New session 1 of user core. Dec 12 19:31:15.158078 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 19:31:15.159559 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 19:31:15.185628 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 19:31:15.188345 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 19:31:15.206369 (systemd)[1800]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 19:31:15.210703 systemd-logind[1642]: New session c1 of user core. Dec 12 19:31:15.356932 systemd[1800]: Queued start job for default target default.target. Dec 12 19:31:15.369823 systemd[1800]: Created slice app.slice - User Application Slice. Dec 12 19:31:15.370075 systemd[1800]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 12 19:31:15.370098 systemd[1800]: Reached target paths.target - Paths. 
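[Editorial note] The kubelet failure above is expected at this stage: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written during node bootstrap (for example by kubeadm), and the unit keeps restarting until it appears. Purely as an illustration of the file format, and not the configuration this node would actually receive, a minimal KubeletConfiguration looks like:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests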
Dec 12 19:31:15.370160 systemd[1800]: Reached target timers.target - Timers. Dec 12 19:31:15.372036 systemd[1800]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 19:31:15.373828 systemd[1800]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 12 19:31:15.397695 systemd[1800]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 19:31:15.397796 systemd[1800]: Reached target sockets.target - Sockets. Dec 12 19:31:15.400409 systemd[1800]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 19:31:15.400533 systemd[1800]: Reached target basic.target - Basic System. Dec 12 19:31:15.400603 systemd[1800]: Reached target default.target - Main User Target. Dec 12 19:31:15.400646 systemd[1800]: Startup finished in 179ms. Dec 12 19:31:15.400857 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 19:31:15.409026 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 19:31:15.444500 login[1767]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 12 19:31:15.453960 systemd-logind[1642]: New session 2 of user core. Dec 12 19:31:15.459971 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 19:31:15.971041 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 19:31:15.972895 systemd[1]: Started sshd@0-10.244.101.34:22-139.178.89.65:55732.service - OpenSSH per-connection server daemon (139.178.89.65:55732). Dec 12 19:31:16.774642 sshd[1834]: Accepted publickey for core from 139.178.89.65 port 55732 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:31:16.777306 sshd-session[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:31:16.788592 systemd-logind[1642]: New session 3 of user core. Dec 12 19:31:16.802033 systemd[1]: Started session-3.scope - Session 3 of User core. 
Dec 12 19:31:16.896713 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:31:16.903698 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 19:31:16.912993 coreos-metadata[1715]: Dec 12 19:31:16.911 WARN failed to locate config-drive, using the metadata service API instead Dec 12 19:31:16.918044 coreos-metadata[1630]: Dec 12 19:31:16.917 WARN failed to locate config-drive, using the metadata service API instead Dec 12 19:31:16.931105 coreos-metadata[1715]: Dec 12 19:31:16.931 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 12 19:31:16.935138 coreos-metadata[1630]: Dec 12 19:31:16.934 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 12 19:31:16.941015 coreos-metadata[1630]: Dec 12 19:31:16.940 INFO Fetch failed with 404: resource not found Dec 12 19:31:16.941229 coreos-metadata[1630]: Dec 12 19:31:16.941 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 12 19:31:16.941891 coreos-metadata[1630]: Dec 12 19:31:16.941 INFO Fetch successful Dec 12 19:31:16.942069 coreos-metadata[1630]: Dec 12 19:31:16.942 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 12 19:31:16.955546 coreos-metadata[1630]: Dec 12 19:31:16.955 INFO Fetch successful Dec 12 19:31:16.955987 coreos-metadata[1630]: Dec 12 19:31:16.955 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 12 19:31:16.957859 coreos-metadata[1715]: Dec 12 19:31:16.957 INFO Fetch successful Dec 12 19:31:16.957997 coreos-metadata[1715]: Dec 12 19:31:16.957 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 12 19:31:16.973879 coreos-metadata[1630]: Dec 12 19:31:16.973 INFO Fetch successful Dec 12 19:31:16.974480 coreos-metadata[1630]: Dec 12 19:31:16.974 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 12 19:31:16.986224 coreos-metadata[1715]: Dec 12 19:31:16.986 INFO Fetch successful Dec 12 19:31:16.987813 coreos-metadata[1630]: Dec 12 19:31:16.987 INFO Fetch successful Dec 12 19:31:16.988277 unknown[1715]: wrote ssh authorized keys file for user: core Dec 12 19:31:16.989233 coreos-metadata[1630]: Dec 12 19:31:16.988 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 12 19:31:17.004349 coreos-metadata[1630]: Dec 12 19:31:17.004 INFO Fetch successful Dec 12 19:31:17.013296 update-ssh-keys[1843]: Updated "/home/core/.ssh/authorized_keys" Dec 12 19:31:17.015074 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 19:31:17.018949 systemd[1]: Finished sshkeys.service. Dec 12 19:31:17.040902 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 19:31:17.042454 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 19:31:17.042871 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 19:31:17.043102 systemd[1]: Startup finished in 3.194s (kernel) + 14.276s (initrd) + 11.058s (userspace) = 28.529s. Dec 12 19:31:17.377642 systemd[1]: Started sshd@1-10.244.101.34:22-139.178.89.65:55738.service - OpenSSH per-connection server daemon (139.178.89.65:55738). 
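[Editorial note] The coreos-metadata units above fail to locate an OpenStack config drive (the repeated config-2 blockdev lookup failures) and fall back to the link-local metadata service at 169.254.169.254, as the log shows. A minimal sketch of the same kind of query, as a hypothetical standalone script rather than the agent's own implementation:

    # Query the EC2-style metadata endpoint used as the config-drive fallback.
    from urllib.request import urlopen

    BASE = "http://169.254.169.254/latest/meta-data"
    for path in ("hostname", "instance-id", "local-ipv4", "public-ipv4"):
        with urlopen(f"{BASE}/{path}", timeout=5) as resp:
            print(path, "=>", resp.read().decode().strip())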
Dec 12 19:31:18.155932 sshd[1853]: Accepted publickey for core from 139.178.89.65 port 55738 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:31:18.158639 sshd-session[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:31:18.173368 systemd-logind[1642]: New session 4 of user core. Dec 12 19:31:18.182414 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 19:31:18.603078 sshd[1856]: Connection closed by 139.178.89.65 port 55738 Dec 12 19:31:18.602744 sshd-session[1853]: pam_unix(sshd:session): session closed for user core Dec 12 19:31:18.612118 systemd[1]: sshd@1-10.244.101.34:22-139.178.89.65:55738.service: Deactivated successfully. Dec 12 19:31:18.615064 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 19:31:18.617877 systemd-logind[1642]: Session 4 logged out. Waiting for processes to exit. Dec 12 19:31:18.619036 systemd-logind[1642]: Removed session 4. Dec 12 19:31:18.762987 systemd[1]: Started sshd@2-10.244.101.34:22-139.178.89.65:55752.service - OpenSSH per-connection server daemon (139.178.89.65:55752). Dec 12 19:31:19.554183 sshd[1862]: Accepted publickey for core from 139.178.89.65 port 55752 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:31:19.555843 sshd-session[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:31:19.561729 systemd-logind[1642]: New session 5 of user core. Dec 12 19:31:19.578104 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 19:31:19.989584 sshd[1865]: Connection closed by 139.178.89.65 port 55752 Dec 12 19:31:19.990338 sshd-session[1862]: pam_unix(sshd:session): session closed for user core Dec 12 19:31:19.995134 systemd[1]: sshd@2-10.244.101.34:22-139.178.89.65:55752.service: Deactivated successfully. Dec 12 19:31:19.997941 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 19:31:19.999416 systemd-logind[1642]: Session 5 logged out. Waiting for processes to exit. Dec 12 19:31:20.000408 systemd-logind[1642]: Removed session 5. Dec 12 19:31:20.143865 systemd[1]: Started sshd@3-10.244.101.34:22-139.178.89.65:56018.service - OpenSSH per-connection server daemon (139.178.89.65:56018). Dec 12 19:31:20.940627 sshd[1871]: Accepted publickey for core from 139.178.89.65 port 56018 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:31:20.942368 sshd-session[1871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:31:20.950249 systemd-logind[1642]: New session 6 of user core. Dec 12 19:31:20.957936 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 19:31:21.384453 sshd[1874]: Connection closed by 139.178.89.65 port 56018 Dec 12 19:31:21.384232 sshd-session[1871]: pam_unix(sshd:session): session closed for user core Dec 12 19:31:21.390518 systemd-logind[1642]: Session 6 logged out. Waiting for processes to exit. Dec 12 19:31:21.391238 systemd[1]: sshd@3-10.244.101.34:22-139.178.89.65:56018.service: Deactivated successfully. Dec 12 19:31:21.393808 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 19:31:21.396144 systemd-logind[1642]: Removed session 6. Dec 12 19:31:21.543913 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 19:31:21.547286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 12 19:31:21.551136 systemd[1]: Started sshd@4-10.244.101.34:22-139.178.89.65:56032.service - OpenSSH per-connection server daemon (139.178.89.65:56032). Dec 12 19:31:21.719444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 19:31:21.736207 (kubelet)[1891]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 19:31:21.787985 kubelet[1891]: E1212 19:31:21.787931 1891 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 19:31:21.792204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 19:31:21.792373 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 19:31:21.793072 systemd[1]: kubelet.service: Consumed 187ms CPU time, 111M memory peak. Dec 12 19:31:22.323171 sshd[1881]: Accepted publickey for core from 139.178.89.65 port 56032 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:31:22.324536 sshd-session[1881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:31:22.330127 systemd-logind[1642]: New session 7 of user core. Dec 12 19:31:22.337950 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 19:31:22.640768 sudo[1898]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 19:31:22.641048 sudo[1898]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 19:31:22.656871 sudo[1898]: pam_unix(sudo:session): session closed for user root Dec 12 19:31:22.803705 sshd[1897]: Connection closed by 139.178.89.65 port 56032 Dec 12 19:31:22.803425 sshd-session[1881]: pam_unix(sshd:session): session closed for user core Dec 12 19:31:22.812451 systemd[1]: sshd@4-10.244.101.34:22-139.178.89.65:56032.service: Deactivated successfully. Dec 12 19:31:22.813519 systemd-logind[1642]: Session 7 logged out. Waiting for processes to exit. Dec 12 19:31:22.816066 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 19:31:22.820083 systemd-logind[1642]: Removed session 7. Dec 12 19:31:22.969087 systemd[1]: Started sshd@5-10.244.101.34:22-139.178.89.65:56040.service - OpenSSH per-connection server daemon (139.178.89.65:56040). Dec 12 19:31:23.780263 sshd[1904]: Accepted publickey for core from 139.178.89.65 port 56040 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:31:23.781912 sshd-session[1904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:31:23.790992 systemd-logind[1642]: New session 8 of user core. Dec 12 19:31:23.799927 systemd[1]: Started session-8.scope - Session 8 of User core. 
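[Editorial note] The audit records that follow carry PROCTITLE fields, which encode the audited process's command line as hex with NUL bytes separating the arguments. A minimal Python sketch for decoding them, using the hex string taken verbatim from the first PROCTITLE record below:

    # Decode an audit PROCTITLE value: hex -> bytes, NUL separators -> spaces.
    cmd_hex = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    print(bytes.fromhex(cmd_hex).replace(b"\x00", b" ").decode())
    # -> /sbin/auditctl -R /etc/audit/audit.rules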
Dec 12 19:31:24.081010 sudo[1909]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 19:31:24.081295 sudo[1909]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 19:31:24.086744 sudo[1909]: pam_unix(sudo:session): session closed for user root Dec 12 19:31:24.094429 sudo[1908]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 19:31:24.095095 sudo[1908]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 19:31:24.107262 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 19:31:24.165000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 19:31:24.170431 kernel: kauditd_printk_skb: 115 callbacks suppressed Dec 12 19:31:24.170478 kernel: audit: type=1305 audit(1765567884.165:227): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 19:31:24.170509 augenrules[1931]: No rules Dec 12 19:31:24.172019 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 19:31:24.172444 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 19:31:24.165000 audit[1931]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdf856ec20 a2=420 a3=0 items=0 ppid=1912 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:24.174649 kernel: audit: type=1300 audit(1765567884.165:227): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdf856ec20 a2=420 a3=0 items=0 ppid=1912 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:24.177681 kernel: audit: type=1327 audit(1765567884.165:227): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 19:31:24.165000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 19:31:24.177026 sudo[1908]: pam_unix(sudo:session): session closed for user root Dec 12 19:31:24.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:24.179094 kernel: audit: type=1130 audit(1765567884.172:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:24.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:24.181244 kernel: audit: type=1131 audit(1765567884.172:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:31:24.176000 audit[1908]: USER_END pid=1908 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 19:31:24.183424 kernel: audit: type=1106 audit(1765567884.176:230): pid=1908 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 19:31:24.176000 audit[1908]: CRED_DISP pid=1908 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 19:31:24.186046 kernel: audit: type=1104 audit(1765567884.176:231): pid=1908 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 19:31:24.324887 sshd[1907]: Connection closed by 139.178.89.65 port 56040 Dec 12 19:31:24.328132 sshd-session[1904]: pam_unix(sshd:session): session closed for user core Dec 12 19:31:24.332000 audit[1904]: USER_END pid=1904 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:31:24.339680 kernel: audit: type=1106 audit(1765567884.332:232): pid=1904 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:31:24.340126 systemd[1]: sshd@5-10.244.101.34:22-139.178.89.65:56040.service: Deactivated successfully. Dec 12 19:31:24.333000 audit[1904]: CRED_DISP pid=1904 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:31:24.342485 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 19:31:24.344251 systemd-logind[1642]: Session 8 logged out. Waiting for processes to exit. Dec 12 19:31:24.344697 kernel: audit: type=1104 audit(1765567884.333:233): pid=1904 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:31:24.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.244.101.34:22-139.178.89.65:56040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:24.346787 systemd-logind[1642]: Removed session 8. Dec 12 19:31:24.348696 kernel: audit: type=1131 audit(1765567884.339:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.244.101.34:22-139.178.89.65:56040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:31:24.493415 systemd[1]: Started sshd@6-10.244.101.34:22-139.178.89.65:56056.service - OpenSSH per-connection server daemon (139.178.89.65:56056). Dec 12 19:31:24.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.244.101.34:22-139.178.89.65:56056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:25.296000 audit[1940]: USER_ACCT pid=1940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:31:25.297931 sshd[1940]: Accepted publickey for core from 139.178.89.65 port 56056 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:31:25.298000 audit[1940]: CRED_ACQ pid=1940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:31:25.298000 audit[1940]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff644a41c0 a2=3 a3=0 items=0 ppid=1 pid=1940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:25.298000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:31:25.300187 sshd-session[1940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:31:25.305562 systemd-logind[1642]: New session 9 of user core. Dec 12 19:31:25.317243 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 19:31:25.322000 audit[1940]: USER_START pid=1940 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:31:25.324000 audit[1943]: CRED_ACQ pid=1943 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:31:25.604000 audit[1944]: USER_ACCT pid=1944 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 19:31:25.604000 audit[1944]: CRED_REFR pid=1944 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 19:31:25.605360 sudo[1944]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 19:31:25.605688 sudo[1944]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 19:31:25.608000 audit[1944]: USER_START pid=1944 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 19:31:26.059941 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 19:31:26.072263 (dockerd)[1962]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 19:31:26.394275 dockerd[1962]: time="2025-12-12T19:31:26.394151262Z" level=info msg="Starting up" Dec 12 19:31:26.398964 dockerd[1962]: time="2025-12-12T19:31:26.398936474Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 19:31:26.415459 dockerd[1962]: time="2025-12-12T19:31:26.415393204Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 19:31:26.455495 dockerd[1962]: time="2025-12-12T19:31:26.455449558Z" level=info msg="Loading containers: start." Dec 12 19:31:26.471686 kernel: Initializing XFRM netlink socket Dec 12 19:31:26.545000 audit[2012]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.545000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff66a5a330 a2=0 a3=0 items=0 ppid=1962 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.545000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 19:31:26.547000 audit[2014]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.547000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffffc97480 a2=0 a3=0 items=0 ppid=1962 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.547000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 19:31:26.549000 audit[2016]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.549000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe84e956b0 a2=0 a3=0 items=0 ppid=1962 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.549000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 19:31:26.551000 audit[2018]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.551000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef1fbef00 a2=0 a3=0 items=0 ppid=1962 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.551000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 19:31:26.553000 audit[2020]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.553000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd77ae7460 a2=0 a3=0 items=0 ppid=1962 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.553000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 19:31:26.558000 audit[2022]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.558000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe525bcb30 a2=0 a3=0 items=0 ppid=1962 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.558000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 19:31:26.564000 audit[2024]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.564000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe8e0736f0 a2=0 a3=0 items=0 ppid=1962 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.564000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 19:31:26.568000 audit[2026]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.568000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc8653c1a0 a2=0 a3=0 items=0 ppid=1962 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.568000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 19:31:26.603000 audit[2029]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.603000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffee66f3230 a2=0 a3=0 items=0 ppid=1962 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.603000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 19:31:26.605000 audit[2031]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.605000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffda93cdcf0 a2=0 a3=0 items=0 ppid=1962 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.605000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 19:31:26.608000 audit[2033]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.608000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffec6a8b440 a2=0 a3=0 items=0 ppid=1962 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.608000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 19:31:26.611000 audit[2035]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.611000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff147e8100 a2=0 a3=0 items=0 ppid=1962 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.611000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 19:31:26.613000 audit[2037]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.613000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdef0ad060 a2=0 a3=0 items=0 ppid=1962 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.613000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 19:31:26.662000 audit[2067]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2067 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:26.662000 audit[2067]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdc7e66e00 a2=0 a3=0 items=0 ppid=1962 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.662000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 19:31:26.664000 audit[2069]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2069 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:26.664000 audit[2069]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe89247b80 a2=0 a3=0 items=0 ppid=1962 pid=2069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.664000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 19:31:26.667000 audit[2071]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:26.667000 audit[2071]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4831e430 a2=0 a3=0 items=0 ppid=1962 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.667000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 19:31:26.669000 audit[2073]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2073 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:26.669000 audit[2073]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0fde6150 a2=0 a3=0 items=0 ppid=1962 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.669000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 19:31:26.672000 audit[2075]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:26.672000 audit[2075]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffb01a0040 a2=0 a3=0 items=0 ppid=1962 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.672000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 19:31:26.674000 audit[2077]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:26.674000 audit[2077]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcb9f300c0 a2=0 a3=0 items=0 ppid=1962 pid=2077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.674000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 19:31:26.677000 audit[2079]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2079 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:26.677000 audit[2079]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe2cdaf2e0 a2=0 a3=0 items=0 ppid=1962 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.677000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 19:31:26.680000 audit[2081]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:26.680000 audit[2081]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffcb9c55560 a2=0 a3=0 items=0 ppid=1962 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.680000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 19:31:26.684000 audit[2083]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:26.684000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd3b77c560 a2=0 a3=0 items=0 ppid=1962 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.684000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 19:31:26.686000 audit[2085]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:26.686000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc8a748240 a2=0 a3=0 items=0 ppid=1962 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.686000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 19:31:26.689000 audit[2087]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:26.689000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe4e7188e0 a2=0 a3=0 items=0 ppid=1962 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.689000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 19:31:26.692000 audit[2089]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2089 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 12 19:31:26.692000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc51f2af90 a2=0 a3=0 items=0 ppid=1962 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.692000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 19:31:26.694000 audit[2091]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:26.694000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe0d8676a0 a2=0 a3=0 items=0 ppid=1962 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.694000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 19:31:26.701000 audit[2096]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.701000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeb0815bc0 a2=0 a3=0 items=0 ppid=1962 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.701000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 19:31:26.704000 audit[2098]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.704000 audit[2098]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd3c4e1d40 a2=0 a3=0 items=0 ppid=1962 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.704000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 19:31:26.707000 audit[2100]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.707000 audit[2100]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff6263b9e0 a2=0 a3=0 items=0 ppid=1962 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.707000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 19:31:26.710000 audit[2102]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:26.710000 audit[2102]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe2ec19af0 a2=0 a3=0 items=0 ppid=1962 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.710000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 19:31:26.713000 audit[2104]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:26.713000 audit[2104]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffebced90a0 a2=0 a3=0 items=0 ppid=1962 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.713000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 19:31:26.716000 audit[2106]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:26.716000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd4cc05590 a2=0 a3=0 items=0 ppid=1962 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.716000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 19:31:26.722835 systemd-timesyncd[1548]: Network configuration changed, trying to establish connection. Dec 12 19:31:26.735000 audit[2111]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.735000 audit[2111]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffdd9ac8ef0 a2=0 a3=0 items=0 ppid=1962 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.735000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 19:31:26.738000 audit[2114]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.738000 audit[2114]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc4348a3b0 a2=0 a3=0 items=0 ppid=1962 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.738000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 19:31:26.748000 audit[2122]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.748000 audit[2122]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffe9184ad40 a2=0 a3=0 items=0 ppid=1962 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.748000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 19:31:26.758000 audit[2128]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.758000 audit[2128]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc0b342d80 a2=0 a3=0 items=0 ppid=1962 pid=2128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.758000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 19:31:26.761000 audit[2130]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.761000 audit[2130]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fffbf52cb20 a2=0 a3=0 items=0 ppid=1962 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.761000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 19:31:26.763000 audit[2132]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.763000 audit[2132]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe861e4160 a2=0 a3=0 items=0 ppid=1962 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.763000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 19:31:26.766000 audit[2134]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.766000 audit[2134]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffea0079260 a2=0 a3=0 items=0 ppid=1962 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.766000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 19:31:26.769000 audit[2136]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:26.769000 audit[2136]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 
a1=7ffd05b9b4f0 a2=0 a3=0 items=0 ppid=1962 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:26.769000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 19:31:26.770389 systemd-networkd[1573]: docker0: Link UP Dec 12 19:31:26.772245 dockerd[1962]: time="2025-12-12T19:31:26.772210939Z" level=info msg="Loading containers: done." Dec 12 19:31:26.787258 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck663162758-merged.mount: Deactivated successfully. Dec 12 19:31:26.792049 dockerd[1962]: time="2025-12-12T19:31:26.791691377Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 19:31:26.792049 dockerd[1962]: time="2025-12-12T19:31:26.791789797Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 19:31:26.792049 dockerd[1962]: time="2025-12-12T19:31:26.791878563Z" level=info msg="Initializing buildkit" Dec 12 19:31:26.809689 dockerd[1962]: time="2025-12-12T19:31:26.809611131Z" level=info msg="Completed buildkit initialization" Dec 12 19:31:26.817951 dockerd[1962]: time="2025-12-12T19:31:26.817893116Z" level=info msg="Daemon has completed initialization" Dec 12 19:31:26.818205 dockerd[1962]: time="2025-12-12T19:31:26.817984291Z" level=info msg="API listen on /run/docker.sock" Dec 12 19:31:26.818367 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 19:31:26.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:27.634638 systemd-resolved[1333]: Clock change detected. Flushing caches. Dec 12 19:31:27.635358 systemd-timesyncd[1548]: Contacted time server [2a02:8010:d015::123]:123 (2.flatcar.pool.ntp.org). Dec 12 19:31:27.635412 systemd-timesyncd[1548]: Initial clock synchronization to Fri 2025-12-12 19:31:27.634536 UTC. Dec 12 19:31:28.898747 containerd[1670]: time="2025-12-12T19:31:28.898635627Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 12 19:31:29.796938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597643064.mount: Deactivated successfully. 
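[Annotation, not part of the captured journal: the hex-encoded PROCTITLE fields in the NETFILTER_CFG audit records above (pids 2012-2136) are the iptables invocations dockerd issues while building its chains; the ip6tables records mirror the same sequence for IPv6. A representative subset, decoded here for readability with null argument separators rendered as spaces:
  /usr/bin/iptables --wait -t nat -N DOCKER
  /usr/bin/iptables --wait -t filter -N DOCKER-FORWARD
  /usr/bin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-1
  /usr/bin/iptables --wait -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
  /usr/bin/iptables --wait -I FORWARD -j DOCKER-FORWARD
  /usr/bin/iptables --wait -I DOCKER-FORWARD -j DOCKER-BRIDGE
  /usr/bin/iptables --wait -t filter -N DOCKER-USER
  /usr/bin/iptables --wait -A DOCKER-USER -j RETURN
  /usr/bin/iptables --wait -I FORWARD -j DOCKER-USER]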
Dec 12 19:31:31.113463 containerd[1670]: time="2025-12-12T19:31:31.113373677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:31.115105 containerd[1670]: time="2025-12-12T19:31:31.114888369Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Dec 12 19:31:31.115621 containerd[1670]: time="2025-12-12T19:31:31.115592732Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:31.118229 containerd[1670]: time="2025-12-12T19:31:31.118195864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:31.119294 containerd[1670]: time="2025-12-12T19:31:31.119262931Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.220522557s" Dec 12 19:31:31.119431 containerd[1670]: time="2025-12-12T19:31:31.119416017Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 12 19:31:31.120532 containerd[1670]: time="2025-12-12T19:31:31.120504741Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 12 19:31:32.572253 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 19:31:32.575715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 19:31:32.788696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 19:31:32.794597 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 12 19:31:32.794782 kernel: audit: type=1130 audit(1765567892.787:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:32.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:32.804352 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 19:31:32.869975 kubelet[2247]: E1212 19:31:32.869798 2247 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 19:31:32.873981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 19:31:32.874295 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
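[Annotation, not part of the captured journal: the kubelet failure recorded above, and repeated on the later restart attempts, is the expected state on a node where kubeadm has not yet run: /var/lib/kubelet/config.yaml is only written during kubeadm init or kubeadm join, so systemd keeps restarting the unit until that file exists. Hypothetical checks on the host, shown only for context:
  ls -l /var/lib/kubelet/config.yaml   # "No such file or directory" matches the error above
  systemctl status kubelet             # restart counter keeps climbing until kubeadm writes the config]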
Dec 12 19:31:32.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 19:31:32.875081 systemd[1]: kubelet.service: Consumed 193ms CPU time, 110.5M memory peak. Dec 12 19:31:32.877458 kernel: audit: type=1131 audit(1765567892.873:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 19:31:33.034176 containerd[1670]: time="2025-12-12T19:31:33.034091291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:33.035580 containerd[1670]: time="2025-12-12T19:31:33.035537264Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Dec 12 19:31:33.035915 containerd[1670]: time="2025-12-12T19:31:33.035881713Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:33.038762 containerd[1670]: time="2025-12-12T19:31:33.038628628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:33.040415 containerd[1670]: time="2025-12-12T19:31:33.040323527Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.919785414s" Dec 12 19:31:33.040415 containerd[1670]: time="2025-12-12T19:31:33.040367891Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 12 19:31:33.041099 containerd[1670]: time="2025-12-12T19:31:33.040979462Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 12 19:31:34.481794 containerd[1670]: time="2025-12-12T19:31:34.481694365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:34.482984 containerd[1670]: time="2025-12-12T19:31:34.482937217Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Dec 12 19:31:34.483827 containerd[1670]: time="2025-12-12T19:31:34.483658854Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:34.487221 containerd[1670]: time="2025-12-12T19:31:34.486217797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:34.487727 containerd[1670]: time="2025-12-12T19:31:34.487695989Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" 
with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.446689084s" Dec 12 19:31:34.487727 containerd[1670]: time="2025-12-12T19:31:34.487726744Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 12 19:31:34.488738 containerd[1670]: time="2025-12-12T19:31:34.488535475Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 12 19:31:36.760286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2599871160.mount: Deactivated successfully. Dec 12 19:31:37.307892 containerd[1670]: time="2025-12-12T19:31:37.307814867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:37.309005 containerd[1670]: time="2025-12-12T19:31:37.308809177Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Dec 12 19:31:37.309502 containerd[1670]: time="2025-12-12T19:31:37.309475146Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:37.310987 containerd[1670]: time="2025-12-12T19:31:37.310959736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:37.311738 containerd[1670]: time="2025-12-12T19:31:37.311708837Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.822205451s" Dec 12 19:31:37.311854 containerd[1670]: time="2025-12-12T19:31:37.311837713Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 12 19:31:37.312467 containerd[1670]: time="2025-12-12T19:31:37.312432182Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 12 19:31:37.983190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3749379916.mount: Deactivated successfully. 
Dec 12 19:31:38.858129 containerd[1670]: time="2025-12-12T19:31:38.858018743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:38.859396 containerd[1670]: time="2025-12-12T19:31:38.859174588Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Dec 12 19:31:38.862179 containerd[1670]: time="2025-12-12T19:31:38.862086867Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:38.864930 containerd[1670]: time="2025-12-12T19:31:38.864034150Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.551533558s" Dec 12 19:31:38.864930 containerd[1670]: time="2025-12-12T19:31:38.864115001Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 12 19:31:38.865967 containerd[1670]: time="2025-12-12T19:31:38.865931965Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 19:31:38.867456 containerd[1670]: time="2025-12-12T19:31:38.866940316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:39.873233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684786513.mount: Deactivated successfully. 
Dec 12 19:31:39.879469 containerd[1670]: time="2025-12-12T19:31:39.878631027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 19:31:39.881383 containerd[1670]: time="2025-12-12T19:31:39.881341739Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 19:31:39.882556 containerd[1670]: time="2025-12-12T19:31:39.882521358Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 19:31:39.885471 containerd[1670]: time="2025-12-12T19:31:39.885310022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 19:31:39.888238 containerd[1670]: time="2025-12-12T19:31:39.887813113Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.021709558s" Dec 12 19:31:39.888238 containerd[1670]: time="2025-12-12T19:31:39.887854160Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 12 19:31:39.890127 containerd[1670]: time="2025-12-12T19:31:39.890097828Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 12 19:31:40.537841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4190853960.mount: Deactivated successfully. Dec 12 19:31:41.893726 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 12 19:31:41.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:41.902059 kernel: audit: type=1131 audit(1765567901.893:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:31:41.911000 audit: BPF prog-id=61 op=UNLOAD Dec 12 19:31:41.913537 kernel: audit: type=1334 audit(1765567901.911:288): prog-id=61 op=UNLOAD Dec 12 19:31:42.980591 containerd[1670]: time="2025-12-12T19:31:42.980527467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:42.982331 containerd[1670]: time="2025-12-12T19:31:42.982079674Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=56977083" Dec 12 19:31:42.982936 containerd[1670]: time="2025-12-12T19:31:42.982911869Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:42.985517 containerd[1670]: time="2025-12-12T19:31:42.985477996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:31:42.986570 containerd[1670]: time="2025-12-12T19:31:42.986544339Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.096218524s" Dec 12 19:31:42.986680 containerd[1670]: time="2025-12-12T19:31:42.986665068Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 12 19:31:43.074036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 12 19:31:43.079178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 19:31:43.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:43.279238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 19:31:43.286857 kernel: audit: type=1130 audit(1765567903.278:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:43.314601 (kubelet)[2394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 19:31:43.407700 kubelet[2394]: E1212 19:31:43.407596 2394 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 19:31:43.411940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 19:31:43.412255 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 19:31:43.413923 systemd[1]: kubelet.service: Consumed 256ms CPU time, 109.4M memory peak. 
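[Annotation, not part of the captured journal: the containerd pulls recorded above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd) correspond to the image set kubeadm pre-pulls before bringing up a control plane. Assuming kubeadm is driving this bootstrap, the same list can be printed with a hypothetical command such as:
  kubeadm config images list --kubernetes-version v1.33.7]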
Dec 12 19:31:43.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 19:31:43.418505 kernel: audit: type=1131 audit(1765567903.412:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 19:31:47.311711 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 19:31:47.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:47.313642 systemd[1]: kubelet.service: Consumed 256ms CPU time, 109.4M memory peak. Dec 12 19:31:47.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:47.319096 kernel: audit: type=1130 audit(1765567907.312:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:47.319162 kernel: audit: type=1131 audit(1765567907.312:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:47.322076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 19:31:47.368064 systemd[1]: Reload requested from client PID 2422 ('systemctl') (unit session-9.scope)... Dec 12 19:31:47.368093 systemd[1]: Reloading... Dec 12 19:31:47.544493 zram_generator::config[2470]: No configuration found. Dec 12 19:31:47.849900 systemd[1]: Reloading finished in 481 ms. 
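[Annotation, not part of the captured journal: the "Reload requested from client PID 2422 ('systemctl')" entry corresponds to a daemon-reload, and the burst of audit BPF prog-id LOAD/UNLOAD records that follows is systemd re-creating the BPF programs it attaches to unit cgroups as part of that reload. The equivalent trigger, shown only for context and not taken from this log:
  systemctl daemon-reload   # each reload detaches and re-attaches per-unit BPF programs, logged by audit as prog-id UNLOAD/LOAD pairs]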
Dec 12 19:31:47.875959 kernel: audit: type=1334 audit(1765567907.870:293): prog-id=65 op=LOAD Dec 12 19:31:47.870000 audit: BPF prog-id=65 op=LOAD Dec 12 19:31:47.886686 kernel: audit: type=1334 audit(1765567907.870:294): prog-id=56 op=UNLOAD Dec 12 19:31:47.886835 kernel: audit: type=1334 audit(1765567907.881:295): prog-id=66 op=LOAD Dec 12 19:31:47.886875 kernel: audit: type=1334 audit(1765567907.882:296): prog-id=57 op=UNLOAD Dec 12 19:31:47.886901 kernel: audit: type=1334 audit(1765567907.884:297): prog-id=67 op=LOAD Dec 12 19:31:47.870000 audit: BPF prog-id=56 op=UNLOAD Dec 12 19:31:47.881000 audit: BPF prog-id=66 op=LOAD Dec 12 19:31:47.882000 audit: BPF prog-id=57 op=UNLOAD Dec 12 19:31:47.884000 audit: BPF prog-id=67 op=LOAD Dec 12 19:31:47.893187 kernel: audit: type=1334 audit(1765567907.884:298): prog-id=44 op=UNLOAD Dec 12 19:31:47.893329 kernel: audit: type=1334 audit(1765567907.884:299): prog-id=68 op=LOAD Dec 12 19:31:47.893357 kernel: audit: type=1334 audit(1765567907.884:300): prog-id=69 op=LOAD Dec 12 19:31:47.884000 audit: BPF prog-id=44 op=UNLOAD Dec 12 19:31:47.884000 audit: BPF prog-id=68 op=LOAD Dec 12 19:31:47.884000 audit: BPF prog-id=69 op=LOAD Dec 12 19:31:47.884000 audit: BPF prog-id=45 op=UNLOAD Dec 12 19:31:47.884000 audit: BPF prog-id=46 op=UNLOAD Dec 12 19:31:47.886000 audit: BPF prog-id=70 op=LOAD Dec 12 19:31:47.886000 audit: BPF prog-id=53 op=UNLOAD Dec 12 19:31:47.886000 audit: BPF prog-id=71 op=LOAD Dec 12 19:31:47.886000 audit: BPF prog-id=41 op=UNLOAD Dec 12 19:31:47.888000 audit: BPF prog-id=72 op=LOAD Dec 12 19:31:47.888000 audit: BPF prog-id=73 op=LOAD Dec 12 19:31:47.888000 audit: BPF prog-id=42 op=UNLOAD Dec 12 19:31:47.888000 audit: BPF prog-id=43 op=UNLOAD Dec 12 19:31:47.890000 audit: BPF prog-id=74 op=LOAD Dec 12 19:31:47.890000 audit: BPF prog-id=58 op=UNLOAD Dec 12 19:31:47.890000 audit: BPF prog-id=75 op=LOAD Dec 12 19:31:47.890000 audit: BPF prog-id=76 op=LOAD Dec 12 19:31:47.890000 audit: BPF prog-id=59 op=UNLOAD Dec 12 19:31:47.890000 audit: BPF prog-id=60 op=UNLOAD Dec 12 19:31:47.893000 audit: BPF prog-id=77 op=LOAD Dec 12 19:31:47.893000 audit: BPF prog-id=50 op=UNLOAD Dec 12 19:31:47.893000 audit: BPF prog-id=78 op=LOAD Dec 12 19:31:47.893000 audit: BPF prog-id=79 op=LOAD Dec 12 19:31:47.893000 audit: BPF prog-id=51 op=UNLOAD Dec 12 19:31:47.893000 audit: BPF prog-id=52 op=UNLOAD Dec 12 19:31:47.894000 audit: BPF prog-id=80 op=LOAD Dec 12 19:31:47.894000 audit: BPF prog-id=81 op=LOAD Dec 12 19:31:47.894000 audit: BPF prog-id=54 op=UNLOAD Dec 12 19:31:47.894000 audit: BPF prog-id=55 op=UNLOAD Dec 12 19:31:47.895000 audit: BPF prog-id=82 op=LOAD Dec 12 19:31:47.895000 audit: BPF prog-id=47 op=UNLOAD Dec 12 19:31:47.895000 audit: BPF prog-id=83 op=LOAD Dec 12 19:31:47.895000 audit: BPF prog-id=84 op=LOAD Dec 12 19:31:47.901000 audit: BPF prog-id=48 op=UNLOAD Dec 12 19:31:47.901000 audit: BPF prog-id=49 op=UNLOAD Dec 12 19:31:47.901000 audit: BPF prog-id=85 op=LOAD Dec 12 19:31:47.901000 audit: BPF prog-id=64 op=UNLOAD Dec 12 19:31:47.920205 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 19:31:47.920298 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 19:31:47.920709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 19:31:47.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 12 19:31:47.920804 systemd[1]: kubelet.service: Consumed 161ms CPU time, 98.4M memory peak. Dec 12 19:31:47.923919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 19:31:48.128271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 19:31:48.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:48.149937 (kubelet)[2538]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 19:31:48.243092 kubelet[2538]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 19:31:48.243864 kubelet[2538]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 19:31:48.243864 kubelet[2538]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 19:31:48.246560 kubelet[2538]: I1212 19:31:48.246109 2538 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 19:31:48.490037 kubelet[2538]: I1212 19:31:48.489909 2538 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 19:31:48.490666 kubelet[2538]: I1212 19:31:48.490216 2538 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 19:31:48.490818 kubelet[2538]: I1212 19:31:48.490801 2538 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 19:31:48.533484 kubelet[2538]: I1212 19:31:48.532331 2538 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 19:31:48.535476 kubelet[2538]: E1212 19:31:48.534680 2538 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.244.101.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.101.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 19:31:48.565123 kubelet[2538]: I1212 19:31:48.565088 2538 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 19:31:48.570580 kubelet[2538]: I1212 19:31:48.570536 2538 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 19:31:48.575781 kubelet[2538]: I1212 19:31:48.575730 2538 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 19:31:48.578463 kubelet[2538]: I1212 19:31:48.575780 2538 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-i3fa2.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 19:31:48.578676 kubelet[2538]: I1212 19:31:48.578487 2538 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 19:31:48.578676 kubelet[2538]: I1212 19:31:48.578502 2538 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 19:31:48.579406 kubelet[2538]: I1212 19:31:48.579382 2538 state_mem.go:36] "Initialized new in-memory state store" Dec 12 19:31:48.582467 kubelet[2538]: I1212 19:31:48.582107 2538 kubelet.go:480] "Attempting to sync node with API server" Dec 12 19:31:48.582467 kubelet[2538]: I1212 19:31:48.582142 2538 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 19:31:48.582467 kubelet[2538]: I1212 19:31:48.582183 2538 kubelet.go:386] "Adding apiserver pod source" Dec 12 19:31:48.582467 kubelet[2538]: I1212 19:31:48.582208 2538 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 19:31:48.598686 kubelet[2538]: I1212 19:31:48.598633 2538 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 19:31:48.602968 kubelet[2538]: I1212 19:31:48.602791 2538 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 19:31:48.604134 kubelet[2538]: W1212 19:31:48.604118 2538 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 12 19:31:48.607461 kubelet[2538]: E1212 19:31:48.607292 2538 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.244.101.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.101.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 19:31:48.607461 kubelet[2538]: E1212 19:31:48.607423 2538 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.101.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-i3fa2.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.101.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 19:31:48.610403 kubelet[2538]: I1212 19:31:48.610380 2538 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 19:31:48.610597 kubelet[2538]: I1212 19:31:48.610587 2538 server.go:1289] "Started kubelet" Dec 12 19:31:48.617360 kubelet[2538]: I1212 19:31:48.617153 2538 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 19:31:48.620602 kubelet[2538]: E1212 19:31:48.614646 2538 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.101.34:6443/api/v1/namespaces/default/events\": dial tcp 10.244.101.34:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-i3fa2.gb1.brightbox.com.18808ea8938f79b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-i3fa2.gb1.brightbox.com,UID:srv-i3fa2.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-i3fa2.gb1.brightbox.com,},FirstTimestamp:2025-12-12 19:31:48.610537909 +0000 UTC m=+0.453591140,LastTimestamp:2025-12-12 19:31:48.610537909 +0000 UTC m=+0.453591140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-i3fa2.gb1.brightbox.com,}" Dec 12 19:31:48.620806 kubelet[2538]: I1212 19:31:48.620774 2538 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 19:31:48.626594 kubelet[2538]: I1212 19:31:48.626355 2538 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 19:31:48.627691 kubelet[2538]: E1212 19:31:48.627666 2538 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-i3fa2.gb1.brightbox.com\" not found" Dec 12 19:31:48.628836 kubelet[2538]: I1212 19:31:48.628289 2538 server.go:317] "Adding debug handlers to kubelet server" Dec 12 19:31:48.632345 kubelet[2538]: I1212 19:31:48.632324 2538 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 19:31:48.632514 kubelet[2538]: I1212 19:31:48.632504 2538 reconciler.go:26] "Reconciler: start to sync state" Dec 12 19:31:48.636481 kubelet[2538]: I1212 19:31:48.636395 2538 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 19:31:48.637485 kubelet[2538]: I1212 19:31:48.636666 2538 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 19:31:48.637753 kubelet[2538]: I1212 19:31:48.637724 2538 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 
19:31:48.643002 kubelet[2538]: E1212 19:31:48.642957 2538 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.244.101.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.101.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 19:31:48.643193 kubelet[2538]: E1212 19:31:48.643035 2538 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.101.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-i3fa2.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.101.34:6443: connect: connection refused" interval="200ms" Dec 12 19:31:48.642000 audit[2554]: NETFILTER_CFG table=mangle:42 family=10 entries=2 op=nft_register_chain pid=2554 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:48.642000 audit[2554]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffed46a4730 a2=0 a3=0 items=0 ppid=2538 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:48.642000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 19:31:48.644603 kubelet[2538]: I1212 19:31:48.644577 2538 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 19:31:48.644000 audit[2556]: NETFILTER_CFG table=mangle:43 family=10 entries=1 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:48.644000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe9a2375a0 a2=0 a3=0 items=0 ppid=2538 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:48.644000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 19:31:48.645000 audit[2555]: NETFILTER_CFG table=mangle:44 family=2 entries=2 op=nft_register_chain pid=2555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:48.645000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc9e319c90 a2=0 a3=0 items=0 ppid=2538 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:48.645000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 19:31:48.648708 kubelet[2538]: E1212 19:31:48.648642 2538 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 19:31:48.649456 kubelet[2538]: I1212 19:31:48.649052 2538 factory.go:223] Registration of the containerd container factory successfully Dec 12 19:31:48.649456 kubelet[2538]: I1212 19:31:48.649071 2538 factory.go:223] Registration of the systemd container factory successfully Dec 12 19:31:48.649456 kubelet[2538]: I1212 19:31:48.649138 2538 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 19:31:48.649000 audit[2557]: NETFILTER_CFG table=nat:45 family=10 entries=1 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:48.649000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc755e3580 a2=0 a3=0 items=0 ppid=2538 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:48.649000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 19:31:48.651000 audit[2558]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:48.651000 audit[2558]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4f663de0 a2=0 a3=0 items=0 ppid=2538 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:48.651000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 19:31:48.666000 audit[2564]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_chain pid=2564 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:31:48.666000 audit[2564]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6cfc8fd0 a2=0 a3=0 items=0 ppid=2538 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:48.666000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 19:31:48.668000 audit[2565]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=2565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:48.668000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff2de4a490 a2=0 a3=0 items=0 ppid=2538 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:48.668000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 19:31:48.671030 kubelet[2538]: I1212 19:31:48.671006 2538 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 19:31:48.671030 kubelet[2538]: I1212 19:31:48.671023 2538 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" 
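The audit records in this stretch capture each iptables/ip6tables invocation the kubelet makes while creating its KUBE-IPTABLES-HINT, KUBE-KUBELET-CANARY and KUBE-FIREWALL chains, but the command line is hex-encoded in the PROCTITLE field as NUL-separated argv. A small Python sketch that recovers the readable command from one PROCTITLE value copied from the audit record above:

```python
# Decode an audit PROCTITLE field (hex-encoded argv, NUL-separated) back into
# a readable command line. The sample value is copied from the record above.
proctitle_hex = (
    "6970367461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65"
)
argv = bytes.fromhex(proctitle_hex).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> ip6tables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle
```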
Dec 12 19:31:48.671166 kubelet[2538]: I1212 19:31:48.671044 2538 state_mem.go:36] "Initialized new in-memory state store" Dec 12 19:31:48.672130 kubelet[2538]: I1212 19:31:48.672112 2538 policy_none.go:49] "None policy: Start" Dec 12 19:31:48.672193 kubelet[2538]: I1212 19:31:48.672135 2538 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 19:31:48.672193 kubelet[2538]: I1212 19:31:48.672154 2538 state_mem.go:35] "Initializing new in-memory state store" Dec 12 19:31:48.672000 audit[2568]: NETFILTER_CFG table=filter:49 family=2 entries=2 op=nft_register_chain pid=2568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:48.672000 audit[2568]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd980e8ac0 a2=0 a3=0 items=0 ppid=2538 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:48.672000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 19:31:48.680000 audit[2571]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:48.680000 audit[2571]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc1b2b2a00 a2=0 a3=0 items=0 ppid=2538 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:48.680000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 12 19:31:48.682000 audit[2572]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=2572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:48.682000 audit[2572]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc23f1060 a2=0 a3=0 items=0 ppid=2538 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:48.682000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 19:31:48.684645 kubelet[2538]: I1212 19:31:48.682644 2538 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 19:31:48.684645 kubelet[2538]: I1212 19:31:48.682685 2538 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 19:31:48.684645 kubelet[2538]: I1212 19:31:48.682708 2538 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 19:31:48.684645 kubelet[2538]: I1212 19:31:48.682717 2538 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 19:31:48.684645 kubelet[2538]: E1212 19:31:48.682762 2538 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 19:31:48.683000 audit[2573]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=2573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:48.683000 audit[2573]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc790deda0 a2=0 a3=0 items=0 ppid=2538 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:48.683000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 19:31:48.690534 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 19:31:48.691793 kubelet[2538]: E1212 19:31:48.691757 2538 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.244.101.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.101.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 19:31:48.691000 audit[2575]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:31:48.691000 audit[2575]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe20aa8f50 a2=0 a3=0 items=0 ppid=2538 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:48.691000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 19:31:48.701776 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 19:31:48.706628 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 19:31:48.716239 kubelet[2538]: E1212 19:31:48.716202 2538 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 19:31:48.717613 kubelet[2538]: I1212 19:31:48.717589 2538 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 19:31:48.717692 kubelet[2538]: I1212 19:31:48.717624 2538 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 19:31:48.718094 kubelet[2538]: I1212 19:31:48.718066 2538 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 19:31:48.725943 kubelet[2538]: E1212 19:31:48.725918 2538 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 19:31:48.726063 kubelet[2538]: E1212 19:31:48.726008 2538 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-i3fa2.gb1.brightbox.com\" not found" Dec 12 19:31:48.803500 systemd[1]: Created slice kubepods-burstable-pod34e2771ebb8956695cb95ec167945b72.slice - libcontainer container kubepods-burstable-pod34e2771ebb8956695cb95ec167945b72.slice. Dec 12 19:31:48.814482 kubelet[2538]: E1212 19:31:48.814375 2538 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i3fa2.gb1.brightbox.com\" not found" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:48.817222 systemd[1]: Created slice kubepods-burstable-poda70d0999ef0a9317a7566053429119a7.slice - libcontainer container kubepods-burstable-poda70d0999ef0a9317a7566053429119a7.slice. Dec 12 19:31:48.819324 kubelet[2538]: E1212 19:31:48.819176 2538 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i3fa2.gb1.brightbox.com\" not found" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:48.820389 kubelet[2538]: I1212 19:31:48.820369 2538 kubelet_node_status.go:75] "Attempting to register node" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:48.821048 kubelet[2538]: E1212 19:31:48.821025 2538 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.101.34:6443/api/v1/nodes\": dial tcp 10.244.101.34:6443: connect: connection refused" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:48.829538 systemd[1]: Created slice kubepods-burstable-pod43599d93ca8b5def82330e5830269cd5.slice - libcontainer container kubepods-burstable-pod43599d93ca8b5def82330e5830269cd5.slice. 
Dec 12 19:31:48.831579 kubelet[2538]: E1212 19:31:48.831560 2538 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i3fa2.gb1.brightbox.com\" not found" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:48.832735 kubelet[2538]: I1212 19:31:48.832713 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34e2771ebb8956695cb95ec167945b72-ca-certs\") pod \"kube-apiserver-srv-i3fa2.gb1.brightbox.com\" (UID: \"34e2771ebb8956695cb95ec167945b72\") " pod="kube-system/kube-apiserver-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:48.832808 kubelet[2538]: I1212 19:31:48.832746 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34e2771ebb8956695cb95ec167945b72-k8s-certs\") pod \"kube-apiserver-srv-i3fa2.gb1.brightbox.com\" (UID: \"34e2771ebb8956695cb95ec167945b72\") " pod="kube-system/kube-apiserver-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:48.832808 kubelet[2538]: I1212 19:31:48.832766 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34e2771ebb8956695cb95ec167945b72-usr-share-ca-certificates\") pod \"kube-apiserver-srv-i3fa2.gb1.brightbox.com\" (UID: \"34e2771ebb8956695cb95ec167945b72\") " pod="kube-system/kube-apiserver-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:48.832808 kubelet[2538]: I1212 19:31:48.832784 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a70d0999ef0a9317a7566053429119a7-ca-certs\") pod \"kube-controller-manager-srv-i3fa2.gb1.brightbox.com\" (UID: \"a70d0999ef0a9317a7566053429119a7\") " pod="kube-system/kube-controller-manager-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:48.832808 kubelet[2538]: I1212 19:31:48.832804 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a70d0999ef0a9317a7566053429119a7-flexvolume-dir\") pod \"kube-controller-manager-srv-i3fa2.gb1.brightbox.com\" (UID: \"a70d0999ef0a9317a7566053429119a7\") " pod="kube-system/kube-controller-manager-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:48.832926 kubelet[2538]: I1212 19:31:48.832820 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a70d0999ef0a9317a7566053429119a7-k8s-certs\") pod \"kube-controller-manager-srv-i3fa2.gb1.brightbox.com\" (UID: \"a70d0999ef0a9317a7566053429119a7\") " pod="kube-system/kube-controller-manager-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:48.832926 kubelet[2538]: I1212 19:31:48.832849 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a70d0999ef0a9317a7566053429119a7-kubeconfig\") pod \"kube-controller-manager-srv-i3fa2.gb1.brightbox.com\" (UID: \"a70d0999ef0a9317a7566053429119a7\") " pod="kube-system/kube-controller-manager-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:48.832926 kubelet[2538]: I1212 19:31:48.832869 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43599d93ca8b5def82330e5830269cd5-kubeconfig\") pod 
\"kube-scheduler-srv-i3fa2.gb1.brightbox.com\" (UID: \"43599d93ca8b5def82330e5830269cd5\") " pod="kube-system/kube-scheduler-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:48.832926 kubelet[2538]: I1212 19:31:48.832890 2538 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a70d0999ef0a9317a7566053429119a7-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-i3fa2.gb1.brightbox.com\" (UID: \"a70d0999ef0a9317a7566053429119a7\") " pod="kube-system/kube-controller-manager-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:48.844356 kubelet[2538]: E1212 19:31:48.844312 2538 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.101.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-i3fa2.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.101.34:6443: connect: connection refused" interval="400ms" Dec 12 19:31:49.024839 kubelet[2538]: I1212 19:31:49.024794 2538 kubelet_node_status.go:75] "Attempting to register node" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:49.025327 kubelet[2538]: E1212 19:31:49.025263 2538 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.101.34:6443/api/v1/nodes\": dial tcp 10.244.101.34:6443: connect: connection refused" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:49.118377 containerd[1670]: time="2025-12-12T19:31:49.118083195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-i3fa2.gb1.brightbox.com,Uid:34e2771ebb8956695cb95ec167945b72,Namespace:kube-system,Attempt:0,}" Dec 12 19:31:49.122470 containerd[1670]: time="2025-12-12T19:31:49.121750042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-i3fa2.gb1.brightbox.com,Uid:a70d0999ef0a9317a7566053429119a7,Namespace:kube-system,Attempt:0,}" Dec 12 19:31:49.135054 containerd[1670]: time="2025-12-12T19:31:49.134988635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-i3fa2.gb1.brightbox.com,Uid:43599d93ca8b5def82330e5830269cd5,Namespace:kube-system,Attempt:0,}" Dec 12 19:31:49.246944 kubelet[2538]: E1212 19:31:49.246867 2538 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.101.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-i3fa2.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.101.34:6443: connect: connection refused" interval="800ms" Dec 12 19:31:49.256866 containerd[1670]: time="2025-12-12T19:31:49.256538594Z" level=info msg="connecting to shim 8289990698c7ef65034f6404a9a82fd8575e951282e2c3cd41699a41542f2092" address="unix:///run/containerd/s/0f9607b337b9678dac903c82c4447141006f753706a76d3151243945a8f2b7be" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:31:49.258510 containerd[1670]: time="2025-12-12T19:31:49.257999811Z" level=info msg="connecting to shim cba191c47f1d73d5b3f861e25730b8a1887c374b1310bd396d2e8bf447c42373" address="unix:///run/containerd/s/01363cb4efd78eca17697597a80ef2dd2d1fcdd66700122c970cbae8a5a47cf7" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:31:49.261559 containerd[1670]: time="2025-12-12T19:31:49.261513637Z" level=info msg="connecting to shim 2fd41a425e5e8ca814d9786be43aea7fdd56430eb853824d9a21e0f36a56720f" address="unix:///run/containerd/s/39ed135c6e65a641d05bb42861b568799272ab2ff66900a87f1ac0770a659080" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:31:49.363723 systemd[1]: Started 
cri-containerd-2fd41a425e5e8ca814d9786be43aea7fdd56430eb853824d9a21e0f36a56720f.scope - libcontainer container 2fd41a425e5e8ca814d9786be43aea7fdd56430eb853824d9a21e0f36a56720f. Dec 12 19:31:49.365374 systemd[1]: Started cri-containerd-cba191c47f1d73d5b3f861e25730b8a1887c374b1310bd396d2e8bf447c42373.scope - libcontainer container cba191c47f1d73d5b3f861e25730b8a1887c374b1310bd396d2e8bf447c42373. Dec 12 19:31:49.381160 systemd[1]: Started cri-containerd-8289990698c7ef65034f6404a9a82fd8575e951282e2c3cd41699a41542f2092.scope - libcontainer container 8289990698c7ef65034f6404a9a82fd8575e951282e2c3cd41699a41542f2092. Dec 12 19:31:49.406000 audit: BPF prog-id=86 op=LOAD Dec 12 19:31:49.410000 audit: BPF prog-id=87 op=LOAD Dec 12 19:31:49.409000 audit: BPF prog-id=88 op=LOAD Dec 12 19:31:49.409000 audit[2638]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2604 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613139316334376631643733643562336638363165323537333062 Dec 12 19:31:49.410000 audit: BPF prog-id=88 op=UNLOAD Dec 12 19:31:49.410000 audit[2638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2604 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613139316334376631643733643562336638363165323537333062 Dec 12 19:31:49.410000 audit: BPF prog-id=89 op=LOAD Dec 12 19:31:49.410000 audit[2638]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2604 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613139316334376631643733643562336638363165323537333062 Dec 12 19:31:49.410000 audit: BPF prog-id=90 op=LOAD Dec 12 19:31:49.410000 audit[2638]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2604 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613139316334376631643733643562336638363165323537333062 Dec 12 19:31:49.410000 audit: BPF prog-id=90 op=UNLOAD Dec 12 19:31:49.410000 audit[2638]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2604 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613139316334376631643733643562336638363165323537333062 Dec 12 19:31:49.410000 audit: BPF prog-id=89 op=UNLOAD Dec 12 19:31:49.410000 audit[2638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2604 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613139316334376631643733643562336638363165323537333062 Dec 12 19:31:49.410000 audit: BPF prog-id=91 op=LOAD Dec 12 19:31:49.410000 audit[2638]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2604 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362613139316334376631643733643562336638363165323537333062 Dec 12 19:31:49.411000 audit: BPF prog-id=92 op=LOAD Dec 12 19:31:49.411000 audit[2640]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2606 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643431613432356535653863613831346439373836626534336165 Dec 12 19:31:49.411000 audit: BPF prog-id=92 op=UNLOAD Dec 12 19:31:49.411000 audit[2640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2606 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643431613432356535653863613831346439373836626534336165 Dec 12 19:31:49.411000 audit: BPF prog-id=93 op=LOAD Dec 12 19:31:49.411000 audit[2640]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2606 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643431613432356535653863613831346439373836626534336165 Dec 12 19:31:49.411000 audit: BPF prog-id=94 op=LOAD Dec 12 19:31:49.411000 audit[2640]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2606 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643431613432356535653863613831346439373836626534336165 Dec 12 19:31:49.411000 audit: BPF prog-id=94 op=UNLOAD Dec 12 19:31:49.411000 audit[2640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2606 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643431613432356535653863613831346439373836626534336165 Dec 12 19:31:49.411000 audit: BPF prog-id=93 op=UNLOAD Dec 12 19:31:49.411000 audit[2640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2606 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643431613432356535653863613831346439373836626534336165 Dec 12 19:31:49.411000 audit: BPF prog-id=95 op=LOAD Dec 12 19:31:49.411000 audit[2640]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2606 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643431613432356535653863613831346439373836626534336165 Dec 12 19:31:49.417000 audit: BPF prog-id=96 op=LOAD Dec 12 19:31:49.418000 audit: BPF prog-id=97 op=LOAD Dec 12 19:31:49.418000 audit[2642]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174238 a2=98 a3=0 items=0 ppid=2603 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 19:31:49.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832383939393036393863376566363530333466363430346139613832 Dec 12 19:31:49.418000 audit: BPF prog-id=97 op=UNLOAD Dec 12 19:31:49.418000 audit[2642]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2603 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832383939393036393863376566363530333466363430346139613832 Dec 12 19:31:49.418000 audit: BPF prog-id=98 op=LOAD Dec 12 19:31:49.418000 audit[2642]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174488 a2=98 a3=0 items=0 ppid=2603 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832383939393036393863376566363530333466363430346139613832 Dec 12 19:31:49.418000 audit: BPF prog-id=99 op=LOAD Dec 12 19:31:49.418000 audit[2642]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000174218 a2=98 a3=0 items=0 ppid=2603 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832383939393036393863376566363530333466363430346139613832 Dec 12 19:31:49.418000 audit: BPF prog-id=99 op=UNLOAD Dec 12 19:31:49.418000 audit[2642]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2603 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832383939393036393863376566363530333466363430346139613832 Dec 12 19:31:49.418000 audit: BPF prog-id=98 op=UNLOAD Dec 12 19:31:49.418000 audit[2642]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2603 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.418000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832383939393036393863376566363530333466363430346139613832 Dec 12 19:31:49.418000 audit: BPF prog-id=100 op=LOAD Dec 12 19:31:49.418000 audit[2642]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001746e8 a2=98 a3=0 items=0 ppid=2603 pid=2642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832383939393036393863376566363530333466363430346139613832 Dec 12 19:31:49.431029 kubelet[2538]: I1212 19:31:49.431003 2538 kubelet_node_status.go:75] "Attempting to register node" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:49.431600 kubelet[2538]: E1212 19:31:49.431572 2538 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.101.34:6443/api/v1/nodes\": dial tcp 10.244.101.34:6443: connect: connection refused" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:49.481109 containerd[1670]: time="2025-12-12T19:31:49.481071305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-i3fa2.gb1.brightbox.com,Uid:a70d0999ef0a9317a7566053429119a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"cba191c47f1d73d5b3f861e25730b8a1887c374b1310bd396d2e8bf447c42373\"" Dec 12 19:31:49.488198 containerd[1670]: time="2025-12-12T19:31:49.488126046Z" level=info msg="CreateContainer within sandbox \"cba191c47f1d73d5b3f861e25730b8a1887c374b1310bd396d2e8bf447c42373\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 19:31:49.508416 containerd[1670]: time="2025-12-12T19:31:49.508379668Z" level=info msg="Container 6ae2ec1fbe8f38cff8817c33e5a1fe77897e61ca2a0e26c648682441daa581b3: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:31:49.516649 containerd[1670]: time="2025-12-12T19:31:49.516582747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-i3fa2.gb1.brightbox.com,Uid:43599d93ca8b5def82330e5830269cd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8289990698c7ef65034f6404a9a82fd8575e951282e2c3cd41699a41542f2092\"" Dec 12 19:31:49.518730 containerd[1670]: time="2025-12-12T19:31:49.518665453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-i3fa2.gb1.brightbox.com,Uid:34e2771ebb8956695cb95ec167945b72,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fd41a425e5e8ca814d9786be43aea7fdd56430eb853824d9a21e0f36a56720f\"" Dec 12 19:31:49.521688 containerd[1670]: time="2025-12-12T19:31:49.521652466Z" level=info msg="CreateContainer within sandbox \"8289990698c7ef65034f6404a9a82fd8575e951282e2c3cd41699a41542f2092\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 19:31:49.523491 containerd[1670]: time="2025-12-12T19:31:49.523342940Z" level=info msg="CreateContainer within sandbox \"2fd41a425e5e8ca814d9786be43aea7fdd56430eb853824d9a21e0f36a56720f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 19:31:49.524875 containerd[1670]: time="2025-12-12T19:31:49.524734647Z" level=info msg="CreateContainer within 
sandbox \"cba191c47f1d73d5b3f861e25730b8a1887c374b1310bd396d2e8bf447c42373\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6ae2ec1fbe8f38cff8817c33e5a1fe77897e61ca2a0e26c648682441daa581b3\"" Dec 12 19:31:49.526503 containerd[1670]: time="2025-12-12T19:31:49.526417559Z" level=info msg="StartContainer for \"6ae2ec1fbe8f38cff8817c33e5a1fe77897e61ca2a0e26c648682441daa581b3\"" Dec 12 19:31:49.528083 containerd[1670]: time="2025-12-12T19:31:49.528040922Z" level=info msg="connecting to shim 6ae2ec1fbe8f38cff8817c33e5a1fe77897e61ca2a0e26c648682441daa581b3" address="unix:///run/containerd/s/01363cb4efd78eca17697597a80ef2dd2d1fcdd66700122c970cbae8a5a47cf7" protocol=ttrpc version=3 Dec 12 19:31:49.536346 containerd[1670]: time="2025-12-12T19:31:49.536096998Z" level=info msg="Container 41fc42adf97a3e9d8c4c8fb5b8c9b455de6405728845a8dfb1c5628c2700fab5: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:31:49.536475 containerd[1670]: time="2025-12-12T19:31:49.536428412Z" level=info msg="Container 36974e8a8ccc56bc3d609a522f1c25f694586103008221d07f899f57049e7808: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:31:49.541276 containerd[1670]: time="2025-12-12T19:31:49.541231255Z" level=info msg="CreateContainer within sandbox \"8289990698c7ef65034f6404a9a82fd8575e951282e2c3cd41699a41542f2092\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36974e8a8ccc56bc3d609a522f1c25f694586103008221d07f899f57049e7808\"" Dec 12 19:31:49.542124 containerd[1670]: time="2025-12-12T19:31:49.542088368Z" level=info msg="StartContainer for \"36974e8a8ccc56bc3d609a522f1c25f694586103008221d07f899f57049e7808\"" Dec 12 19:31:49.544062 containerd[1670]: time="2025-12-12T19:31:49.543998205Z" level=info msg="connecting to shim 36974e8a8ccc56bc3d609a522f1c25f694586103008221d07f899f57049e7808" address="unix:///run/containerd/s/0f9607b337b9678dac903c82c4447141006f753706a76d3151243945a8f2b7be" protocol=ttrpc version=3 Dec 12 19:31:49.547980 containerd[1670]: time="2025-12-12T19:31:49.547936172Z" level=info msg="CreateContainer within sandbox \"2fd41a425e5e8ca814d9786be43aea7fdd56430eb853824d9a21e0f36a56720f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"41fc42adf97a3e9d8c4c8fb5b8c9b455de6405728845a8dfb1c5628c2700fab5\"" Dec 12 19:31:49.549158 containerd[1670]: time="2025-12-12T19:31:49.549131518Z" level=info msg="StartContainer for \"41fc42adf97a3e9d8c4c8fb5b8c9b455de6405728845a8dfb1c5628c2700fab5\"" Dec 12 19:31:49.556964 containerd[1670]: time="2025-12-12T19:31:49.556830992Z" level=info msg="connecting to shim 41fc42adf97a3e9d8c4c8fb5b8c9b455de6405728845a8dfb1c5628c2700fab5" address="unix:///run/containerd/s/39ed135c6e65a641d05bb42861b568799272ab2ff66900a87f1ac0770a659080" protocol=ttrpc version=3 Dec 12 19:31:49.558936 systemd[1]: Started cri-containerd-6ae2ec1fbe8f38cff8817c33e5a1fe77897e61ca2a0e26c648682441daa581b3.scope - libcontainer container 6ae2ec1fbe8f38cff8817c33e5a1fe77897e61ca2a0e26c648682441daa581b3. 
Dec 12 19:31:49.576993 kubelet[2538]: E1212 19:31:49.576904 2538 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.101.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-i3fa2.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.101.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 19:31:49.584000 audit: BPF prog-id=101 op=LOAD Dec 12 19:31:49.585000 audit: BPF prog-id=102 op=LOAD Dec 12 19:31:49.585000 audit[2725]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2604 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653265633166626538663338636666383831376333336535613166 Dec 12 19:31:49.585000 audit: BPF prog-id=102 op=UNLOAD Dec 12 19:31:49.585000 audit[2725]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2604 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653265633166626538663338636666383831376333336535613166 Dec 12 19:31:49.587000 audit: BPF prog-id=103 op=LOAD Dec 12 19:31:49.587000 audit[2725]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2604 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653265633166626538663338636666383831376333336535613166 Dec 12 19:31:49.589000 audit: BPF prog-id=104 op=LOAD Dec 12 19:31:49.589000 audit[2725]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2604 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.591790 systemd[1]: Started cri-containerd-41fc42adf97a3e9d8c4c8fb5b8c9b455de6405728845a8dfb1c5628c2700fab5.scope - libcontainer container 41fc42adf97a3e9d8c4c8fb5b8c9b455de6405728845a8dfb1c5628c2700fab5. 
Dec 12 19:31:49.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653265633166626538663338636666383831376333336535613166 Dec 12 19:31:49.591000 audit: BPF prog-id=104 op=UNLOAD Dec 12 19:31:49.591000 audit[2725]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2604 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.591000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653265633166626538663338636666383831376333336535613166 Dec 12 19:31:49.593000 audit: BPF prog-id=103 op=UNLOAD Dec 12 19:31:49.593000 audit[2725]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2604 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653265633166626538663338636666383831376333336535613166 Dec 12 19:31:49.596000 audit: BPF prog-id=105 op=LOAD Dec 12 19:31:49.596000 audit[2725]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2604 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653265633166626538663338636666383831376333336535613166 Dec 12 19:31:49.602791 systemd[1]: Started cri-containerd-36974e8a8ccc56bc3d609a522f1c25f694586103008221d07f899f57049e7808.scope - libcontainer container 36974e8a8ccc56bc3d609a522f1c25f694586103008221d07f899f57049e7808. 
Dec 12 19:31:49.623000 audit: BPF prog-id=106 op=LOAD Dec 12 19:31:49.624000 audit: BPF prog-id=107 op=LOAD Dec 12 19:31:49.624000 audit[2739]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2603 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393734653861386363633536626333643630396135323266316332 Dec 12 19:31:49.624000 audit: BPF prog-id=107 op=UNLOAD Dec 12 19:31:49.624000 audit[2739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2603 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393734653861386363633536626333643630396135323266316332 Dec 12 19:31:49.624000 audit: BPF prog-id=108 op=LOAD Dec 12 19:31:49.624000 audit[2739]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2603 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393734653861386363633536626333643630396135323266316332 Dec 12 19:31:49.624000 audit: BPF prog-id=109 op=LOAD Dec 12 19:31:49.624000 audit[2739]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2603 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393734653861386363633536626333643630396135323266316332 Dec 12 19:31:49.624000 audit: BPF prog-id=109 op=UNLOAD Dec 12 19:31:49.624000 audit[2739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2603 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393734653861386363633536626333643630396135323266316332 Dec 12 19:31:49.624000 audit: BPF prog-id=108 op=UNLOAD Dec 12 19:31:49.624000 audit[2739]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2603 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393734653861386363633536626333643630396135323266316332 Dec 12 19:31:49.624000 audit: BPF prog-id=110 op=LOAD Dec 12 19:31:49.624000 audit[2739]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2603 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393734653861386363633536626333643630396135323266316332 Dec 12 19:31:49.626000 audit: BPF prog-id=111 op=LOAD Dec 12 19:31:49.626000 audit: BPF prog-id=112 op=LOAD Dec 12 19:31:49.626000 audit[2742]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2606 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431666334326164663937613365396438633463386662356238633962 Dec 12 19:31:49.627000 audit: BPF prog-id=112 op=UNLOAD Dec 12 19:31:49.627000 audit[2742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2606 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431666334326164663937613365396438633463386662356238633962 Dec 12 19:31:49.628000 audit: BPF prog-id=113 op=LOAD Dec 12 19:31:49.628000 audit[2742]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2606 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431666334326164663937613365396438633463386662356238633962 Dec 12 19:31:49.628000 audit: BPF prog-id=114 op=LOAD Dec 12 19:31:49.628000 audit[2742]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2606 pid=2742 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431666334326164663937613365396438633463386662356238633962 Dec 12 19:31:49.629000 audit: BPF prog-id=114 op=UNLOAD Dec 12 19:31:49.629000 audit[2742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2606 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431666334326164663937613365396438633463386662356238633962 Dec 12 19:31:49.629000 audit: BPF prog-id=113 op=UNLOAD Dec 12 19:31:49.629000 audit[2742]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2606 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431666334326164663937613365396438633463386662356238633962 Dec 12 19:31:49.629000 audit: BPF prog-id=115 op=LOAD Dec 12 19:31:49.629000 audit[2742]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2606 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:49.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431666334326164663937613365396438633463386662356238633962 Dec 12 19:31:49.667463 containerd[1670]: time="2025-12-12T19:31:49.667070822Z" level=info msg="StartContainer for \"6ae2ec1fbe8f38cff8817c33e5a1fe77897e61ca2a0e26c648682441daa581b3\" returns successfully" Dec 12 19:31:49.709820 kubelet[2538]: E1212 19:31:49.709785 2538 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i3fa2.gb1.brightbox.com\" not found" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:49.727945 containerd[1670]: time="2025-12-12T19:31:49.727895661Z" level=info msg="StartContainer for \"41fc42adf97a3e9d8c4c8fb5b8c9b455de6405728845a8dfb1c5628c2700fab5\" returns successfully" Dec 12 19:31:49.746021 containerd[1670]: time="2025-12-12T19:31:49.745987607Z" level=info msg="StartContainer for \"36974e8a8ccc56bc3d609a522f1c25f694586103008221d07f899f57049e7808\" returns successfully" Dec 12 19:31:49.844730 kubelet[2538]: E1212 19:31:49.844681 2538 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.244.101.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.101.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 19:31:50.234691 kubelet[2538]: I1212 19:31:50.234659 2538 kubelet_node_status.go:75] "Attempting to register node" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:50.731037 kubelet[2538]: E1212 19:31:50.730989 2538 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i3fa2.gb1.brightbox.com\" not found" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:50.731726 kubelet[2538]: E1212 19:31:50.731563 2538 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i3fa2.gb1.brightbox.com\" not found" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:51.276278 kubelet[2538]: E1212 19:31:51.276238 2538 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i3fa2.gb1.brightbox.com\" not found" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:51.732721 kubelet[2538]: E1212 19:31:51.732574 2538 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i3fa2.gb1.brightbox.com\" not found" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:51.734237 kubelet[2538]: E1212 19:31:51.734217 2538 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i3fa2.gb1.brightbox.com\" not found" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:51.819760 kubelet[2538]: E1212 19:31:51.819712 2538 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-i3fa2.gb1.brightbox.com\" not found" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:51.936629 kubelet[2538]: I1212 19:31:51.936584 2538 kubelet_node_status.go:78] "Successfully registered node" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:52.033879 kubelet[2538]: I1212 19:31:52.033270 2538 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:52.043847 kubelet[2538]: E1212 19:31:52.043782 2538 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-i3fa2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:52.043847 kubelet[2538]: I1212 19:31:52.043835 2538 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:52.046252 kubelet[2538]: E1212 19:31:52.046195 2538 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-i3fa2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:52.046376 kubelet[2538]: I1212 19:31:52.046258 2538 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:52.048674 kubelet[2538]: E1212 19:31:52.048633 2538 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-i3fa2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-i3fa2.gb1.brightbox.com" Dec 12 
19:31:52.605648 kubelet[2538]: I1212 19:31:52.605277 2538 apiserver.go:52] "Watching apiserver" Dec 12 19:31:52.632890 kubelet[2538]: I1212 19:31:52.632790 2538 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 19:31:52.736264 kubelet[2538]: I1212 19:31:52.735381 2538 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:52.736264 kubelet[2538]: I1212 19:31:52.735821 2538 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:52.746601 kubelet[2538]: I1212 19:31:52.744706 2538 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 19:31:52.747257 kubelet[2538]: I1212 19:31:52.747231 2538 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 19:31:54.193203 systemd[1]: Reload requested from client PID 2826 ('systemctl') (unit session-9.scope)... Dec 12 19:31:54.193222 systemd[1]: Reloading... Dec 12 19:31:54.354667 zram_generator::config[2871]: No configuration found. Dec 12 19:31:54.668980 systemd[1]: Reloading finished in 475 ms. Dec 12 19:31:54.712857 kubelet[2538]: I1212 19:31:54.712734 2538 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 19:31:54.712782 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 19:31:54.729347 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 19:31:54.734373 kernel: kauditd_printk_skb: 204 callbacks suppressed Dec 12 19:31:54.735057 kernel: audit: type=1131 audit(1765567914.728:397): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:54.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:31:54.730098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 19:31:54.730199 systemd[1]: kubelet.service: Consumed 968ms CPU time, 130.2M memory peak. Dec 12 19:31:54.740813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 12 19:31:54.740000 audit: BPF prog-id=116 op=LOAD Dec 12 19:31:54.745663 kernel: audit: type=1334 audit(1765567914.740:398): prog-id=116 op=LOAD Dec 12 19:31:54.745755 kernel: audit: type=1334 audit(1765567914.740:399): prog-id=66 op=UNLOAD Dec 12 19:31:54.745788 kernel: audit: type=1334 audit(1765567914.741:400): prog-id=117 op=LOAD Dec 12 19:31:54.745818 kernel: audit: type=1334 audit(1765567914.741:401): prog-id=118 op=LOAD Dec 12 19:31:54.740000 audit: BPF prog-id=66 op=UNLOAD Dec 12 19:31:54.741000 audit: BPF prog-id=117 op=LOAD Dec 12 19:31:54.741000 audit: BPF prog-id=118 op=LOAD Dec 12 19:31:54.741000 audit: BPF prog-id=80 op=UNLOAD Dec 12 19:31:54.750931 kernel: audit: type=1334 audit(1765567914.741:402): prog-id=80 op=UNLOAD Dec 12 19:31:54.751007 kernel: audit: type=1334 audit(1765567914.741:403): prog-id=81 op=UNLOAD Dec 12 19:31:54.751033 kernel: audit: type=1334 audit(1765567914.742:404): prog-id=119 op=LOAD Dec 12 19:31:54.741000 audit: BPF prog-id=81 op=UNLOAD Dec 12 19:31:54.742000 audit: BPF prog-id=119 op=LOAD Dec 12 19:31:54.753010 kernel: audit: type=1334 audit(1765567914.742:405): prog-id=74 op=UNLOAD Dec 12 19:31:54.753072 kernel: audit: type=1334 audit(1765567914.743:406): prog-id=120 op=LOAD Dec 12 19:31:54.742000 audit: BPF prog-id=74 op=UNLOAD Dec 12 19:31:54.743000 audit: BPF prog-id=120 op=LOAD Dec 12 19:31:54.743000 audit: BPF prog-id=121 op=LOAD Dec 12 19:31:54.743000 audit: BPF prog-id=75 op=UNLOAD Dec 12 19:31:54.743000 audit: BPF prog-id=76 op=UNLOAD Dec 12 19:31:54.743000 audit: BPF prog-id=122 op=LOAD Dec 12 19:31:54.743000 audit: BPF prog-id=71 op=UNLOAD Dec 12 19:31:54.743000 audit: BPF prog-id=123 op=LOAD Dec 12 19:31:54.743000 audit: BPF prog-id=124 op=LOAD Dec 12 19:31:54.743000 audit: BPF prog-id=72 op=UNLOAD Dec 12 19:31:54.743000 audit: BPF prog-id=73 op=UNLOAD Dec 12 19:31:54.744000 audit: BPF prog-id=125 op=LOAD Dec 12 19:31:54.744000 audit: BPF prog-id=85 op=UNLOAD Dec 12 19:31:54.745000 audit: BPF prog-id=126 op=LOAD Dec 12 19:31:54.745000 audit: BPF prog-id=70 op=UNLOAD Dec 12 19:31:54.746000 audit: BPF prog-id=127 op=LOAD Dec 12 19:31:54.746000 audit: BPF prog-id=77 op=UNLOAD Dec 12 19:31:54.746000 audit: BPF prog-id=128 op=LOAD Dec 12 19:31:54.746000 audit: BPF prog-id=129 op=LOAD Dec 12 19:31:54.746000 audit: BPF prog-id=78 op=UNLOAD Dec 12 19:31:54.746000 audit: BPF prog-id=79 op=UNLOAD Dec 12 19:31:54.747000 audit: BPF prog-id=130 op=LOAD Dec 12 19:31:54.747000 audit: BPF prog-id=65 op=UNLOAD Dec 12 19:31:54.748000 audit: BPF prog-id=131 op=LOAD Dec 12 19:31:54.748000 audit: BPF prog-id=67 op=UNLOAD Dec 12 19:31:54.748000 audit: BPF prog-id=132 op=LOAD Dec 12 19:31:54.748000 audit: BPF prog-id=133 op=LOAD Dec 12 19:31:54.748000 audit: BPF prog-id=68 op=UNLOAD Dec 12 19:31:54.748000 audit: BPF prog-id=69 op=UNLOAD Dec 12 19:31:54.748000 audit: BPF prog-id=134 op=LOAD Dec 12 19:31:54.748000 audit: BPF prog-id=82 op=UNLOAD Dec 12 19:31:54.749000 audit: BPF prog-id=135 op=LOAD Dec 12 19:31:54.749000 audit: BPF prog-id=136 op=LOAD Dec 12 19:31:54.749000 audit: BPF prog-id=83 op=UNLOAD Dec 12 19:31:54.749000 audit: BPF prog-id=84 op=UNLOAD Dec 12 19:31:54.954692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 19:31:54.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:31:54.967853 (kubelet)[2938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 19:31:55.051253 kubelet[2938]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 19:31:55.051253 kubelet[2938]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 19:31:55.051253 kubelet[2938]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 19:31:55.051774 kubelet[2938]: I1212 19:31:55.051290 2938 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 19:31:55.071483 kubelet[2938]: I1212 19:31:55.070841 2938 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 19:31:55.071483 kubelet[2938]: I1212 19:31:55.070887 2938 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 19:31:55.071483 kubelet[2938]: I1212 19:31:55.071232 2938 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 19:31:55.076279 kubelet[2938]: I1212 19:31:55.076232 2938 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 19:31:55.092513 kubelet[2938]: I1212 19:31:55.092098 2938 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 19:31:55.121599 kubelet[2938]: I1212 19:31:55.121541 2938 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 19:31:55.134303 kubelet[2938]: I1212 19:31:55.134203 2938 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 19:31:55.135634 kubelet[2938]: I1212 19:31:55.135541 2938 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 19:31:55.135920 kubelet[2938]: I1212 19:31:55.135577 2938 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-i3fa2.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 19:31:55.138332 kubelet[2938]: I1212 19:31:55.138298 2938 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 19:31:55.138332 kubelet[2938]: I1212 19:31:55.138328 2938 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 19:31:55.138579 kubelet[2938]: I1212 19:31:55.138427 2938 state_mem.go:36] "Initialized new in-memory state store" Dec 12 19:31:55.138666 kubelet[2938]: I1212 19:31:55.138650 2938 kubelet.go:480] "Attempting to sync node with API server" Dec 12 19:31:55.138708 kubelet[2938]: I1212 19:31:55.138666 2938 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 19:31:55.144768 kubelet[2938]: I1212 19:31:55.140886 2938 kubelet.go:386] "Adding apiserver pod source" Dec 12 19:31:55.144768 kubelet[2938]: I1212 19:31:55.141307 2938 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 19:31:55.157970 kubelet[2938]: I1212 19:31:55.157361 2938 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 19:31:55.157970 kubelet[2938]: I1212 19:31:55.157936 2938 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 19:31:55.173453 kubelet[2938]: I1212 19:31:55.172021 2938 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 19:31:55.173453 kubelet[2938]: I1212 19:31:55.172088 2938 server.go:1289] "Started kubelet" Dec 12 19:31:55.185198 kubelet[2938]: I1212 19:31:55.184999 2938 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 19:31:55.193680 kubelet[2938]: I1212 
19:31:55.193426 2938 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 19:31:55.196829 kubelet[2938]: I1212 19:31:55.195798 2938 server.go:317] "Adding debug handlers to kubelet server" Dec 12 19:31:55.197687 kubelet[2938]: I1212 19:31:55.197253 2938 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 19:31:55.205586 kubelet[2938]: I1212 19:31:55.205506 2938 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 19:31:55.205927 kubelet[2938]: I1212 19:31:55.205732 2938 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 19:31:55.207165 kubelet[2938]: I1212 19:31:55.206210 2938 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 19:31:55.207285 kubelet[2938]: I1212 19:31:55.206351 2938 reconciler.go:26] "Reconciler: start to sync state" Dec 12 19:31:55.208890 kubelet[2938]: I1212 19:31:55.208490 2938 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 19:31:55.211108 kubelet[2938]: I1212 19:31:55.209595 2938 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 19:31:55.211108 kubelet[2938]: I1212 19:31:55.210507 2938 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 19:31:55.211108 kubelet[2938]: I1212 19:31:55.210541 2938 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 19:31:55.211108 kubelet[2938]: I1212 19:31:55.210552 2938 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 19:31:55.211108 kubelet[2938]: E1212 19:31:55.210607 2938 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 19:31:55.213850 kubelet[2938]: I1212 19:31:55.213809 2938 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 19:31:55.221764 update_engine[1643]: I20251212 19:31:55.219565 1643 update_attempter.cc:509] Updating boot flags... Dec 12 19:31:55.222585 kubelet[2938]: I1212 19:31:55.222298 2938 factory.go:223] Registration of the systemd container factory successfully Dec 12 19:31:55.222585 kubelet[2938]: I1212 19:31:55.222579 2938 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 19:31:55.227586 kubelet[2938]: E1212 19:31:55.227561 2938 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 19:31:55.229574 kubelet[2938]: I1212 19:31:55.228069 2938 factory.go:223] Registration of the containerd container factory successfully Dec 12 19:31:55.318090 kubelet[2938]: E1212 19:31:55.318059 2938 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 19:31:55.403256 kubelet[2938]: I1212 19:31:55.403067 2938 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 19:31:55.403256 kubelet[2938]: I1212 19:31:55.403086 2938 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 19:31:55.403256 kubelet[2938]: I1212 19:31:55.403109 2938 state_mem.go:36] "Initialized new in-memory state store" Dec 12 19:31:55.403256 kubelet[2938]: I1212 19:31:55.403260 2938 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 19:31:55.403496 kubelet[2938]: I1212 19:31:55.403270 2938 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 19:31:55.403496 kubelet[2938]: I1212 19:31:55.403296 2938 policy_none.go:49] "None policy: Start" Dec 12 19:31:55.403496 kubelet[2938]: I1212 19:31:55.403309 2938 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 19:31:55.403496 kubelet[2938]: I1212 19:31:55.403319 2938 state_mem.go:35] "Initializing new in-memory state store" Dec 12 19:31:55.403496 kubelet[2938]: I1212 19:31:55.403409 2938 state_mem.go:75] "Updated machine memory state" Dec 12 19:31:55.452109 kubelet[2938]: E1212 19:31:55.452074 2938 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 19:31:55.452276 kubelet[2938]: I1212 19:31:55.452263 2938 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 19:31:55.452323 kubelet[2938]: I1212 19:31:55.452279 2938 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 19:31:55.464545 kubelet[2938]: E1212 19:31:55.463368 2938 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 19:31:55.464545 kubelet[2938]: I1212 19:31:55.463873 2938 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 19:31:55.523464 kubelet[2938]: I1212 19:31:55.520412 2938 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:55.526900 kubelet[2938]: I1212 19:31:55.524685 2938 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:55.526900 kubelet[2938]: I1212 19:31:55.525978 2938 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:55.545000 kubelet[2938]: I1212 19:31:55.540988 2938 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 19:31:55.548498 kubelet[2938]: I1212 19:31:55.548459 2938 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 19:31:55.549633 kubelet[2938]: E1212 19:31:55.549579 2938 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-i3fa2.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:55.549833 kubelet[2938]: I1212 19:31:55.549804 2938 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 19:31:55.549889 kubelet[2938]: E1212 19:31:55.549875 2938 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-i3fa2.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:55.610851 kubelet[2938]: I1212 19:31:55.610228 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43599d93ca8b5def82330e5830269cd5-kubeconfig\") pod \"kube-scheduler-srv-i3fa2.gb1.brightbox.com\" (UID: \"43599d93ca8b5def82330e5830269cd5\") " pod="kube-system/kube-scheduler-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:55.610851 kubelet[2938]: I1212 19:31:55.610301 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34e2771ebb8956695cb95ec167945b72-ca-certs\") pod \"kube-apiserver-srv-i3fa2.gb1.brightbox.com\" (UID: \"34e2771ebb8956695cb95ec167945b72\") " pod="kube-system/kube-apiserver-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:55.610851 kubelet[2938]: I1212 19:31:55.610335 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34e2771ebb8956695cb95ec167945b72-usr-share-ca-certificates\") pod \"kube-apiserver-srv-i3fa2.gb1.brightbox.com\" (UID: \"34e2771ebb8956695cb95ec167945b72\") " pod="kube-system/kube-apiserver-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:55.610851 kubelet[2938]: I1212 19:31:55.610379 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a70d0999ef0a9317a7566053429119a7-kubeconfig\") pod \"kube-controller-manager-srv-i3fa2.gb1.brightbox.com\" (UID: 
\"a70d0999ef0a9317a7566053429119a7\") " pod="kube-system/kube-controller-manager-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:55.610851 kubelet[2938]: I1212 19:31:55.610397 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34e2771ebb8956695cb95ec167945b72-k8s-certs\") pod \"kube-apiserver-srv-i3fa2.gb1.brightbox.com\" (UID: \"34e2771ebb8956695cb95ec167945b72\") " pod="kube-system/kube-apiserver-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:55.611350 kubelet[2938]: I1212 19:31:55.610413 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a70d0999ef0a9317a7566053429119a7-ca-certs\") pod \"kube-controller-manager-srv-i3fa2.gb1.brightbox.com\" (UID: \"a70d0999ef0a9317a7566053429119a7\") " pod="kube-system/kube-controller-manager-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:55.611350 kubelet[2938]: I1212 19:31:55.610429 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a70d0999ef0a9317a7566053429119a7-flexvolume-dir\") pod \"kube-controller-manager-srv-i3fa2.gb1.brightbox.com\" (UID: \"a70d0999ef0a9317a7566053429119a7\") " pod="kube-system/kube-controller-manager-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:55.611504 kubelet[2938]: I1212 19:31:55.611480 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a70d0999ef0a9317a7566053429119a7-k8s-certs\") pod \"kube-controller-manager-srv-i3fa2.gb1.brightbox.com\" (UID: \"a70d0999ef0a9317a7566053429119a7\") " pod="kube-system/kube-controller-manager-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:55.611581 kubelet[2938]: I1212 19:31:55.611570 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a70d0999ef0a9317a7566053429119a7-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-i3fa2.gb1.brightbox.com\" (UID: \"a70d0999ef0a9317a7566053429119a7\") " pod="kube-system/kube-controller-manager-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:55.612057 kubelet[2938]: I1212 19:31:55.612041 2938 kubelet_node_status.go:75] "Attempting to register node" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:55.623343 kubelet[2938]: I1212 19:31:55.623147 2938 kubelet_node_status.go:124] "Node was previously registered" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:55.624031 kubelet[2938]: I1212 19:31:55.623942 2938 kubelet_node_status.go:78] "Successfully registered node" node="srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:56.154305 kubelet[2938]: I1212 19:31:56.153141 2938 apiserver.go:52] "Watching apiserver" Dec 12 19:31:56.206261 kubelet[2938]: I1212 19:31:56.206198 2938 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 19:31:56.298866 kubelet[2938]: I1212 19:31:56.298827 2938 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:56.310079 kubelet[2938]: I1212 19:31:56.310047 2938 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 19:31:56.310697 kubelet[2938]: E1212 19:31:56.310671 2938 kubelet.go:3311] 
"Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-i3fa2.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-i3fa2.gb1.brightbox.com" Dec 12 19:31:56.351492 kubelet[2938]: I1212 19:31:56.351291 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-i3fa2.gb1.brightbox.com" podStartSLOduration=4.35126966 podStartE2EDuration="4.35126966s" podCreationTimestamp="2025-12-12 19:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 19:31:56.33782313 +0000 UTC m=+1.355975029" watchObservedRunningTime="2025-12-12 19:31:56.35126966 +0000 UTC m=+1.369421559" Dec 12 19:31:56.360844 kubelet[2938]: I1212 19:31:56.360515 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-i3fa2.gb1.brightbox.com" podStartSLOduration=1.360496079 podStartE2EDuration="1.360496079s" podCreationTimestamp="2025-12-12 19:31:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 19:31:56.351645034 +0000 UTC m=+1.369796934" watchObservedRunningTime="2025-12-12 19:31:56.360496079 +0000 UTC m=+1.378647978" Dec 12 19:31:56.361408 kubelet[2938]: I1212 19:31:56.361169 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-i3fa2.gb1.brightbox.com" podStartSLOduration=4.36097429 podStartE2EDuration="4.36097429s" podCreationTimestamp="2025-12-12 19:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 19:31:56.360323395 +0000 UTC m=+1.378475311" watchObservedRunningTime="2025-12-12 19:31:56.36097429 +0000 UTC m=+1.379126212" Dec 12 19:31:59.002460 kubelet[2938]: I1212 19:31:59.002392 2938 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 19:31:59.029221 containerd[1670]: time="2025-12-12T19:31:59.026407132Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 19:31:59.030217 kubelet[2938]: I1212 19:31:59.026831 2938 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 19:31:59.462821 systemd[1]: Created slice kubepods-besteffort-pod74c13dfd_88b7_45cf_a694_039b4c718820.slice - libcontainer container kubepods-besteffort-pod74c13dfd_88b7_45cf_a694_039b4c718820.slice. 
Dec 12 19:31:59.537303 kubelet[2938]: I1212 19:31:59.537042 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74c13dfd-88b7-45cf-a694-039b4c718820-xtables-lock\") pod \"kube-proxy-nfbjn\" (UID: \"74c13dfd-88b7-45cf-a694-039b4c718820\") " pod="kube-system/kube-proxy-nfbjn" Dec 12 19:31:59.537303 kubelet[2938]: I1212 19:31:59.537123 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/74c13dfd-88b7-45cf-a694-039b4c718820-kube-proxy\") pod \"kube-proxy-nfbjn\" (UID: \"74c13dfd-88b7-45cf-a694-039b4c718820\") " pod="kube-system/kube-proxy-nfbjn" Dec 12 19:31:59.537303 kubelet[2938]: I1212 19:31:59.537163 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74c13dfd-88b7-45cf-a694-039b4c718820-lib-modules\") pod \"kube-proxy-nfbjn\" (UID: \"74c13dfd-88b7-45cf-a694-039b4c718820\") " pod="kube-system/kube-proxy-nfbjn" Dec 12 19:31:59.537303 kubelet[2938]: I1212 19:31:59.537200 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tf2b\" (UniqueName: \"kubernetes.io/projected/74c13dfd-88b7-45cf-a694-039b4c718820-kube-api-access-5tf2b\") pod \"kube-proxy-nfbjn\" (UID: \"74c13dfd-88b7-45cf-a694-039b4c718820\") " pod="kube-system/kube-proxy-nfbjn" Dec 12 19:31:59.777891 containerd[1670]: time="2025-12-12T19:31:59.776936601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nfbjn,Uid:74c13dfd-88b7-45cf-a694-039b4c718820,Namespace:kube-system,Attempt:0,}" Dec 12 19:31:59.797150 containerd[1670]: time="2025-12-12T19:31:59.797109737Z" level=info msg="connecting to shim 8abb3e93476d99c2c003d3af6bebf56d5a5e32e2b95997f2df6f9506f35a3beb" address="unix:///run/containerd/s/ea06d2df3236fd7c04dcf98ee3584a58c8b714dd7eb95b2964f02584f6039784" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:31:59.833687 systemd[1]: Started cri-containerd-8abb3e93476d99c2c003d3af6bebf56d5a5e32e2b95997f2df6f9506f35a3beb.scope - libcontainer container 8abb3e93476d99c2c003d3af6bebf56d5a5e32e2b95997f2df6f9506f35a3beb. 
Dec 12 19:31:59.850781 kernel: kauditd_printk_skb: 34 callbacks suppressed Dec 12 19:31:59.850924 kernel: audit: type=1334 audit(1765567919.845:441): prog-id=137 op=LOAD Dec 12 19:31:59.845000 audit: BPF prog-id=137 op=LOAD Dec 12 19:31:59.850000 audit: BPF prog-id=138 op=LOAD Dec 12 19:31:59.850000 audit[3023]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3013 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:59.853710 kernel: audit: type=1334 audit(1765567919.850:442): prog-id=138 op=LOAD Dec 12 19:31:59.853769 kernel: audit: type=1300 audit(1765567919.850:442): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3013 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:59.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861626233653933343736643939633263303033643361663662656266 Dec 12 19:31:59.856599 kernel: audit: type=1327 audit(1765567919.850:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861626233653933343736643939633263303033643361663662656266 Dec 12 19:31:59.858677 kernel: audit: type=1334 audit(1765567919.850:443): prog-id=138 op=UNLOAD Dec 12 19:31:59.850000 audit: BPF prog-id=138 op=UNLOAD Dec 12 19:31:59.850000 audit[3023]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:59.860080 kernel: audit: type=1300 audit(1765567919.850:443): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:59.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861626233653933343736643939633263303033643361663662656266 Dec 12 19:31:59.862963 kernel: audit: type=1327 audit(1765567919.850:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861626233653933343736643939633263303033643361663662656266 Dec 12 19:31:59.850000 audit: BPF prog-id=139 op=LOAD Dec 12 19:31:59.865621 kernel: audit: type=1334 audit(1765567919.850:444): prog-id=139 op=LOAD Dec 12 19:31:59.865651 kernel: audit: type=1300 audit(1765567919.850:444): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3013 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:59.850000 audit[3023]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3013 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:59.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861626233653933343736643939633263303033643361663662656266 Dec 12 19:31:59.876895 kernel: audit: type=1327 audit(1765567919.850:444): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861626233653933343736643939633263303033643361663662656266 Dec 12 19:31:59.850000 audit: BPF prog-id=140 op=LOAD Dec 12 19:31:59.850000 audit[3023]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3013 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:59.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861626233653933343736643939633263303033643361663662656266 Dec 12 19:31:59.850000 audit: BPF prog-id=140 op=UNLOAD Dec 12 19:31:59.850000 audit[3023]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:59.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861626233653933343736643939633263303033643361663662656266 Dec 12 19:31:59.850000 audit: BPF prog-id=139 op=UNLOAD Dec 12 19:31:59.850000 audit[3023]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:59.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861626233653933343736643939633263303033643361663662656266 Dec 12 19:31:59.850000 audit: BPF prog-id=141 op=LOAD Dec 12 19:31:59.850000 audit[3023]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3013 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:31:59.850000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861626233653933343736643939633263303033643361663662656266 Dec 12 19:31:59.891500 containerd[1670]: time="2025-12-12T19:31:59.891244561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nfbjn,Uid:74c13dfd-88b7-45cf-a694-039b4c718820,Namespace:kube-system,Attempt:0,} returns sandbox id \"8abb3e93476d99c2c003d3af6bebf56d5a5e32e2b95997f2df6f9506f35a3beb\"" Dec 12 19:31:59.899747 containerd[1670]: time="2025-12-12T19:31:59.899696480Z" level=info msg="CreateContainer within sandbox \"8abb3e93476d99c2c003d3af6bebf56d5a5e32e2b95997f2df6f9506f35a3beb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 19:31:59.924552 containerd[1670]: time="2025-12-12T19:31:59.924502939Z" level=info msg="Container e24ba11fb1d2f667a6e3a93d02526e490cf5365b264a7ca8f40d42b039be6514: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:31:59.928564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount106593705.mount: Deactivated successfully. Dec 12 19:31:59.935614 containerd[1670]: time="2025-12-12T19:31:59.935568950Z" level=info msg="CreateContainer within sandbox \"8abb3e93476d99c2c003d3af6bebf56d5a5e32e2b95997f2df6f9506f35a3beb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e24ba11fb1d2f667a6e3a93d02526e490cf5365b264a7ca8f40d42b039be6514\"" Dec 12 19:31:59.936656 containerd[1670]: time="2025-12-12T19:31:59.936619765Z" level=info msg="StartContainer for \"e24ba11fb1d2f667a6e3a93d02526e490cf5365b264a7ca8f40d42b039be6514\"" Dec 12 19:31:59.940862 containerd[1670]: time="2025-12-12T19:31:59.940792182Z" level=info msg="connecting to shim e24ba11fb1d2f667a6e3a93d02526e490cf5365b264a7ca8f40d42b039be6514" address="unix:///run/containerd/s/ea06d2df3236fd7c04dcf98ee3584a58c8b714dd7eb95b2964f02584f6039784" protocol=ttrpc version=3 Dec 12 19:31:59.963810 systemd[1]: Started cri-containerd-e24ba11fb1d2f667a6e3a93d02526e490cf5365b264a7ca8f40d42b039be6514.scope - libcontainer container e24ba11fb1d2f667a6e3a93d02526e490cf5365b264a7ca8f40d42b039be6514. 
Dec 12 19:32:00.031000 audit: BPF prog-id=142 op=LOAD Dec 12 19:32:00.031000 audit[3049]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3013 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532346261313166623164326636363761366533613933643032353236 Dec 12 19:32:00.031000 audit: BPF prog-id=143 op=LOAD Dec 12 19:32:00.031000 audit[3049]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3013 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532346261313166623164326636363761366533613933643032353236 Dec 12 19:32:00.032000 audit: BPF prog-id=143 op=UNLOAD Dec 12 19:32:00.032000 audit[3049]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532346261313166623164326636363761366533613933643032353236 Dec 12 19:32:00.033000 audit: BPF prog-id=142 op=UNLOAD Dec 12 19:32:00.033000 audit[3049]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3013 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532346261313166623164326636363761366533613933643032353236 Dec 12 19:32:00.033000 audit: BPF prog-id=144 op=LOAD Dec 12 19:32:00.033000 audit[3049]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3013 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532346261313166623164326636363761366533613933643032353236 Dec 12 19:32:00.061988 containerd[1670]: time="2025-12-12T19:32:00.061950175Z" level=info msg="StartContainer for 
\"e24ba11fb1d2f667a6e3a93d02526e490cf5365b264a7ca8f40d42b039be6514\" returns successfully" Dec 12 19:32:00.210342 systemd[1]: Created slice kubepods-besteffort-pod005e9b84_103b_4896_afc8_de93b47f1122.slice - libcontainer container kubepods-besteffort-pod005e9b84_103b_4896_afc8_de93b47f1122.slice. Dec 12 19:32:00.241967 kubelet[2938]: I1212 19:32:00.241903 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/005e9b84-103b-4896-afc8-de93b47f1122-var-lib-calico\") pod \"tigera-operator-7dcd859c48-kcp6x\" (UID: \"005e9b84-103b-4896-afc8-de93b47f1122\") " pod="tigera-operator/tigera-operator-7dcd859c48-kcp6x" Dec 12 19:32:00.243500 kubelet[2938]: I1212 19:32:00.243358 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kf5h\" (UniqueName: \"kubernetes.io/projected/005e9b84-103b-4896-afc8-de93b47f1122-kube-api-access-2kf5h\") pod \"tigera-operator-7dcd859c48-kcp6x\" (UID: \"005e9b84-103b-4896-afc8-de93b47f1122\") " pod="tigera-operator/tigera-operator-7dcd859c48-kcp6x" Dec 12 19:32:00.425000 audit[3114]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.425000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe464df500 a2=0 a3=7ffe464df4ec items=0 ppid=3062 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.425000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 19:32:00.427000 audit[3115]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.427000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe9c009190 a2=0 a3=7ffe9c00917c items=0 ppid=3062 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.427000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 19:32:00.428000 audit[3117]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.428000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc84722090 a2=0 a3=7ffc8472207c items=0 ppid=3062 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.428000 audit[3118]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.428000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd67e1e20 a2=0 a3=7fffd67e1e0c items=0 ppid=3062 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
12 19:32:00.428000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 19:32:00.428000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 19:32:00.430000 audit[3120]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.430000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1d842870 a2=0 a3=7ffe1d84285c items=0 ppid=3062 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.430000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 19:32:00.431000 audit[3121]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.431000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc95f17ed0 a2=0 a3=7ffc95f17ebc items=0 ppid=3062 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.431000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 19:32:00.516196 containerd[1670]: time="2025-12-12T19:32:00.516145414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kcp6x,Uid:005e9b84-103b-4896-afc8-de93b47f1122,Namespace:tigera-operator,Attempt:0,}" Dec 12 19:32:00.533758 containerd[1670]: time="2025-12-12T19:32:00.533703038Z" level=info msg="connecting to shim f03645477d950959f989886a3f3b6f5de019769cb061ed094b7b17cefc136267" address="unix:///run/containerd/s/4a78bf743449d7d5c03bc839d5fe3bf6599b0788b25e4b459470486f6231a778" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:32:00.542000 audit[3146]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.542000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe2b6608e0 a2=0 a3=7ffe2b6608cc items=0 ppid=3062 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.542000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 19:32:00.548000 audit[3153]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.548000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffedf5fc8c0 a2=0 a3=7ffedf5fc8ac items=0 ppid=3062 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.548000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 12 19:32:00.557000 audit[3161]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.557000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffc312c4e0 a2=0 a3=7fffc312c4cc items=0 ppid=3062 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.557000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 12 19:32:00.561000 audit[3162]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.561000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6609c900 a2=0 a3=7ffe6609c8ec items=0 ppid=3062 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.561000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 19:32:00.565000 audit[3165]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3165 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.565000 audit[3165]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd8cde0030 a2=0 a3=7ffd8cde001c items=0 ppid=3062 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.565000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 19:32:00.567000 audit[3166]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.567000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6ccd4bf0 a2=0 a3=7fff6ccd4bdc items=0 ppid=3062 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.567000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 19:32:00.569697 systemd[1]: Started cri-containerd-f03645477d950959f989886a3f3b6f5de019769cb061ed094b7b17cefc136267.scope - libcontainer container f03645477d950959f989886a3f3b6f5de019769cb061ed094b7b17cefc136267. 
Dec 12 19:32:00.572000 audit[3168]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3168 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.572000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd5ae000e0 a2=0 a3=7ffd5ae000cc items=0 ppid=3062 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.572000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 19:32:00.580000 audit[3173]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.580000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc8f737130 a2=0 a3=7ffc8f73711c items=0 ppid=3062 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 12 19:32:00.582000 audit[3178]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3178 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.582000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe94562050 a2=0 a3=7ffe9456203c items=0 ppid=3062 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.582000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 19:32:00.586000 audit[3181]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.586000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc4abea320 a2=0 a3=7ffc4abea30c items=0 ppid=3062 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.586000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 19:32:00.588000 audit[3182]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.588000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcff63fdc0 a2=0 a3=7ffcff63fdac items=0 ppid=3062 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 19:32:00.591000 audit[3184]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.591000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffccf3e6b60 a2=0 a3=7ffccf3e6b4c items=0 ppid=3062 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.591000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 19:32:00.596000 audit: BPF prog-id=145 op=LOAD Dec 12 19:32:00.597000 audit: BPF prog-id=146 op=LOAD Dec 12 19:32:00.597000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3134 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630333634353437376439353039353966393839383836613366336236 Dec 12 19:32:00.597000 audit: BPF prog-id=146 op=UNLOAD Dec 12 19:32:00.597000 audit[3145]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630333634353437376439353039353966393839383836613366336236 Dec 12 19:32:00.597000 audit[3187]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.597000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffef34e0b00 a2=0 a3=7ffef34e0aec items=0 ppid=3062 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.597000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 19:32:00.598000 audit: BPF prog-id=147 op=LOAD Dec 12 19:32:00.598000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3134 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630333634353437376439353039353966393839383836613366336236 Dec 12 19:32:00.598000 audit: BPF prog-id=148 op=LOAD Dec 12 19:32:00.598000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3134 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630333634353437376439353039353966393839383836613366336236 Dec 12 19:32:00.598000 audit: BPF prog-id=148 op=UNLOAD Dec 12 19:32:00.598000 audit[3145]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630333634353437376439353039353966393839383836613366336236 Dec 12 19:32:00.598000 audit: BPF prog-id=147 op=UNLOAD Dec 12 19:32:00.598000 audit[3145]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630333634353437376439353039353966393839383836613366336236 Dec 12 19:32:00.598000 audit: BPF prog-id=149 op=LOAD Dec 12 19:32:00.598000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3134 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630333634353437376439353039353966393839383836613366336236 Dec 12 19:32:00.607000 audit[3190]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.607000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc037e3e00 a2=0 a3=7ffc037e3dec items=0 ppid=3062 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.607000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 19:32:00.609000 audit[3191]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.609000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe80d49c90 a2=0 a3=7ffe80d49c7c items=0 ppid=3062 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.609000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 19:32:00.613000 audit[3193]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.613000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc9ac4a010 a2=0 a3=7ffc9ac49ffc items=0 ppid=3062 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.613000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 19:32:00.617000 audit[3196]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3196 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.617000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd4eeca670 a2=0 a3=7ffd4eeca65c items=0 ppid=3062 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.617000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 19:32:00.619000 audit[3197]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3197 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.619000 audit[3197]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee05b0e60 a2=0 a3=7ffee05b0e4c items=0 ppid=3062 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 19:32:00.623000 audit[3199]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3199 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 19:32:00.623000 audit[3199]: SYSCALL arch=c000003e syscall=46 
success=yes exit=532 a0=3 a1=7ffe8b26db60 a2=0 a3=7ffe8b26db4c items=0 ppid=3062 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.623000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 19:32:00.653000 audit[3205]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:00.653000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd2b3b8f10 a2=0 a3=7ffd2b3b8efc items=0 ppid=3062 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.653000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:00.656796 containerd[1670]: time="2025-12-12T19:32:00.656718277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kcp6x,Uid:005e9b84-103b-4896-afc8-de93b47f1122,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f03645477d950959f989886a3f3b6f5de019769cb061ed094b7b17cefc136267\"" Dec 12 19:32:00.660499 containerd[1670]: time="2025-12-12T19:32:00.660462337Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 19:32:00.662000 audit[3205]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:00.662000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd2b3b8f10 a2=0 a3=7ffd2b3b8efc items=0 ppid=3062 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.662000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:00.674000 audit[3217]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3217 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.674000 audit[3217]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe93b58200 a2=0 a3=7ffe93b581ec items=0 ppid=3062 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.674000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 19:32:00.685000 audit[3219]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3219 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.685000 audit[3219]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcbd8c2e20 a2=0 a3=7ffcbd8c2e0c items=0 ppid=3062 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.685000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 12 19:32:00.691000 audit[3222]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.691000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffca4107280 a2=0 a3=7ffca410726c items=0 ppid=3062 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.691000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 12 19:32:00.692000 audit[3223]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3223 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.692000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa6864ca0 a2=0 a3=7fffa6864c8c items=0 ppid=3062 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.692000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 19:32:00.695000 audit[3225]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.695000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc81392020 a2=0 a3=7ffc8139200c items=0 ppid=3062 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.695000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 19:32:00.697000 audit[3226]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3226 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.697000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4c0d3bb0 a2=0 a3=7fff4c0d3b9c items=0 ppid=3062 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.697000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 19:32:00.701000 audit[3228]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 12 19:32:00.701000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc3a350940 a2=0 a3=7ffc3a35092c items=0 ppid=3062 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.701000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 12 19:32:00.711000 audit[3231]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3231 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.711000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe0c90dc50 a2=0 a3=7ffe0c90dc3c items=0 ppid=3062 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.711000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 19:32:00.713000 audit[3232]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.713000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8fa8c4c0 a2=0 a3=7ffe8fa8c4ac items=0 ppid=3062 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.713000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 19:32:00.721000 audit[3234]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.721000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd7255bd50 a2=0 a3=7ffd7255bd3c items=0 ppid=3062 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.721000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 19:32:00.723000 audit[3235]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3235 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.723000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed601fec0 a2=0 a3=7ffed601feac items=0 ppid=3062 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.723000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 19:32:00.729000 audit[3237]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.729000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdc2de8920 a2=0 a3=7ffdc2de890c items=0 ppid=3062 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.729000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 19:32:00.736000 audit[3240]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.736000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe08df7140 a2=0 a3=7ffe08df712c items=0 ppid=3062 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.736000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 19:32:00.743000 audit[3243]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3243 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.743000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc9a5182e0 a2=0 a3=7ffc9a5182cc items=0 ppid=3062 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.743000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 12 19:32:00.745000 audit[3244]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.745000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe8ced47e0 a2=0 a3=7ffe8ced47cc items=0 ppid=3062 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.745000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 19:32:00.750000 audit[3246]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.750000 audit[3246]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe8b645710 
a2=0 a3=7ffe8b6456fc items=0 ppid=3062 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.750000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 19:32:00.759000 audit[3249]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.759000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd43287200 a2=0 a3=7ffd432871ec items=0 ppid=3062 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.759000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 19:32:00.761000 audit[3250]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.761000 audit[3250]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce3b41040 a2=0 a3=7ffce3b4102c items=0 ppid=3062 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.761000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 19:32:00.764000 audit[3252]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3252 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.764000 audit[3252]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd6cb90290 a2=0 a3=7ffd6cb9027c items=0 ppid=3062 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.764000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 19:32:00.765000 audit[3253]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.765000 audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc913cdcb0 a2=0 a3=7ffc913cdc9c items=0 ppid=3062 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.765000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 19:32:00.768000 audit[3255]: NETFILTER_CFG table=filter:101 family=10 entries=1 
op=nft_register_rule pid=3255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.768000 audit[3255]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe96063640 a2=0 a3=7ffe9606362c items=0 ppid=3062 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.768000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 19:32:00.773000 audit[3258]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3258 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 19:32:00.773000 audit[3258]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd065804f0 a2=0 a3=7ffd065804dc items=0 ppid=3062 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 19:32:00.777000 audit[3260]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 19:32:00.777000 audit[3260]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff40636550 a2=0 a3=7fff4063653c items=0 ppid=3062 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.777000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:00.778000 audit[3260]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 19:32:00.778000 audit[3260]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff40636550 a2=0 a3=7fff4063653c items=0 ppid=3062 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:00.778000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:03.138468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177467975.mount: Deactivated successfully. 
Dec 12 19:32:03.810743 containerd[1670]: time="2025-12-12T19:32:03.810683967Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:32:03.811472 containerd[1670]: time="2025-12-12T19:32:03.811418707Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 12 19:32:03.812146 containerd[1670]: time="2025-12-12T19:32:03.812120691Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:32:03.814001 containerd[1670]: time="2025-12-12T19:32:03.813972975Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:32:03.815073 containerd[1670]: time="2025-12-12T19:32:03.814960307Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.154452291s" Dec 12 19:32:03.815073 containerd[1670]: time="2025-12-12T19:32:03.814989798Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 12 19:32:03.821192 containerd[1670]: time="2025-12-12T19:32:03.821139676Z" level=info msg="CreateContainer within sandbox \"f03645477d950959f989886a3f3b6f5de019769cb061ed094b7b17cefc136267\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 19:32:03.829214 containerd[1670]: time="2025-12-12T19:32:03.828978271Z" level=info msg="Container a9cb2b8ded8b721ed28543b58e7cacc29e683748760813b8dbaf07ae59cc7b48: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:32:03.833504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2023956231.mount: Deactivated successfully. Dec 12 19:32:03.855475 containerd[1670]: time="2025-12-12T19:32:03.855399868Z" level=info msg="CreateContainer within sandbox \"f03645477d950959f989886a3f3b6f5de019769cb061ed094b7b17cefc136267\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a9cb2b8ded8b721ed28543b58e7cacc29e683748760813b8dbaf07ae59cc7b48\"" Dec 12 19:32:03.859346 containerd[1670]: time="2025-12-12T19:32:03.857764134Z" level=info msg="StartContainer for \"a9cb2b8ded8b721ed28543b58e7cacc29e683748760813b8dbaf07ae59cc7b48\"" Dec 12 19:32:03.859346 containerd[1670]: time="2025-12-12T19:32:03.859021380Z" level=info msg="connecting to shim a9cb2b8ded8b721ed28543b58e7cacc29e683748760813b8dbaf07ae59cc7b48" address="unix:///run/containerd/s/4a78bf743449d7d5c03bc839d5fe3bf6599b0788b25e4b459470486f6231a778" protocol=ttrpc version=3 Dec 12 19:32:03.886688 systemd[1]: Started cri-containerd-a9cb2b8ded8b721ed28543b58e7cacc29e683748760813b8dbaf07ae59cc7b48.scope - libcontainer container a9cb2b8ded8b721ed28543b58e7cacc29e683748760813b8dbaf07ae59cc7b48. 
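The image-pull records earlier in this window report 23558205 bytes read for quay.io/tigera/operator:v1.38.7 over the 3.154452291 s the pull took (resolved image size 25057686 bytes); the effective transfer rate follows directly from those two figures, for example:

    # Values taken from the "stop pulling image" and "Pulled image ... in 3.154452291s" records above.
    bytes_read = 23_558_205
    pull_seconds = 3.154452291
    print(f"{bytes_read / pull_seconds / 2**20:.2f} MiB/s")  # roughly 7.1 MiB/s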
Dec 12 19:32:03.899000 audit: BPF prog-id=150 op=LOAD Dec 12 19:32:03.900000 audit: BPF prog-id=151 op=LOAD Dec 12 19:32:03.900000 audit[3269]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3134 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:03.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139636232623864656438623732316564323835343362353865376361 Dec 12 19:32:03.900000 audit: BPF prog-id=151 op=UNLOAD Dec 12 19:32:03.900000 audit[3269]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:03.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139636232623864656438623732316564323835343362353865376361 Dec 12 19:32:03.900000 audit: BPF prog-id=152 op=LOAD Dec 12 19:32:03.900000 audit[3269]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3134 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:03.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139636232623864656438623732316564323835343362353865376361 Dec 12 19:32:03.900000 audit: BPF prog-id=153 op=LOAD Dec 12 19:32:03.900000 audit[3269]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3134 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:03.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139636232623864656438623732316564323835343362353865376361 Dec 12 19:32:03.900000 audit: BPF prog-id=153 op=UNLOAD Dec 12 19:32:03.900000 audit[3269]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:03.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139636232623864656438623732316564323835343362353865376361 Dec 12 19:32:03.900000 audit: BPF prog-id=152 op=UNLOAD Dec 12 19:32:03.900000 audit[3269]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:03.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139636232623864656438623732316564323835343362353865376361 Dec 12 19:32:03.900000 audit: BPF prog-id=154 op=LOAD Dec 12 19:32:03.900000 audit[3269]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3134 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:03.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139636232623864656438623732316564323835343362353865376361 Dec 12 19:32:03.927792 containerd[1670]: time="2025-12-12T19:32:03.927722807Z" level=info msg="StartContainer for \"a9cb2b8ded8b721ed28543b58e7cacc29e683748760813b8dbaf07ae59cc7b48\" returns successfully" Dec 12 19:32:04.347689 kubelet[2938]: I1212 19:32:04.347523 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nfbjn" podStartSLOduration=5.347496631 podStartE2EDuration="5.347496631s" podCreationTimestamp="2025-12-12 19:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 19:32:00.332021238 +0000 UTC m=+5.350173276" watchObservedRunningTime="2025-12-12 19:32:04.347496631 +0000 UTC m=+9.365648553" Dec 12 19:32:07.277033 systemd[1]: cri-containerd-a9cb2b8ded8b721ed28543b58e7cacc29e683748760813b8dbaf07ae59cc7b48.scope: Deactivated successfully. Dec 12 19:32:07.305321 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 12 19:32:07.309958 kernel: audit: type=1334 audit(1765567927.298:521): prog-id=150 op=UNLOAD Dec 12 19:32:07.298000 audit: BPF prog-id=150 op=UNLOAD Dec 12 19:32:07.311762 kernel: audit: type=1334 audit(1765567927.298:522): prog-id=154 op=UNLOAD Dec 12 19:32:07.298000 audit: BPF prog-id=154 op=UNLOAD Dec 12 19:32:07.321121 containerd[1670]: time="2025-12-12T19:32:07.319882609Z" level=info msg="received container exit event container_id:\"a9cb2b8ded8b721ed28543b58e7cacc29e683748760813b8dbaf07ae59cc7b48\" id:\"a9cb2b8ded8b721ed28543b58e7cacc29e683748760813b8dbaf07ae59cc7b48\" pid:3282 exit_status:1 exited_at:{seconds:1765567927 nanos:282695076}" Dec 12 19:32:07.383995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9cb2b8ded8b721ed28543b58e7cacc29e683748760813b8dbaf07ae59cc7b48-rootfs.mount: Deactivated successfully. 
Dec 12 19:32:07.961502 kubelet[2938]: I1212 19:32:07.961153 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-kcp6x" podStartSLOduration=4.802465607 podStartE2EDuration="7.961074076s" podCreationTimestamp="2025-12-12 19:32:00 +0000 UTC" firstStartedPulling="2025-12-12 19:32:00.658977931 +0000 UTC m=+5.677129829" lastFinishedPulling="2025-12-12 19:32:03.8175864 +0000 UTC m=+8.835738298" observedRunningTime="2025-12-12 19:32:04.350558261 +0000 UTC m=+9.368710209" watchObservedRunningTime="2025-12-12 19:32:07.961074076 +0000 UTC m=+12.979225995" Dec 12 19:32:08.381466 kubelet[2938]: I1212 19:32:08.381134 2938 scope.go:117] "RemoveContainer" containerID="a9cb2b8ded8b721ed28543b58e7cacc29e683748760813b8dbaf07ae59cc7b48" Dec 12 19:32:08.386212 containerd[1670]: time="2025-12-12T19:32:08.386105460Z" level=info msg="CreateContainer within sandbox \"f03645477d950959f989886a3f3b6f5de019769cb061ed094b7b17cefc136267\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 12 19:32:08.400076 containerd[1670]: time="2025-12-12T19:32:08.396526230Z" level=info msg="Container ad547214dccd960d804c7341a58062002207a8578e52719111a867fba3d11405: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:32:08.406788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540512092.mount: Deactivated successfully. Dec 12 19:32:08.421932 containerd[1670]: time="2025-12-12T19:32:08.421817699Z" level=info msg="CreateContainer within sandbox \"f03645477d950959f989886a3f3b6f5de019769cb061ed094b7b17cefc136267\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ad547214dccd960d804c7341a58062002207a8578e52719111a867fba3d11405\"" Dec 12 19:32:08.424368 containerd[1670]: time="2025-12-12T19:32:08.424331156Z" level=info msg="StartContainer for \"ad547214dccd960d804c7341a58062002207a8578e52719111a867fba3d11405\"" Dec 12 19:32:08.425954 containerd[1670]: time="2025-12-12T19:32:08.425919125Z" level=info msg="connecting to shim ad547214dccd960d804c7341a58062002207a8578e52719111a867fba3d11405" address="unix:///run/containerd/s/4a78bf743449d7d5c03bc839d5fe3bf6599b0788b25e4b459470486f6231a778" protocol=ttrpc version=3 Dec 12 19:32:08.470825 systemd[1]: Started cri-containerd-ad547214dccd960d804c7341a58062002207a8578e52719111a867fba3d11405.scope - libcontainer container ad547214dccd960d804c7341a58062002207a8578e52719111a867fba3d11405. 
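The pod_startup_latency_tracker record above can be cross-checked from its own timestamps: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and the reported podStartSLOduration equals that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small sketch, assuming Python 3, with the record's timestamps trimmed to microsecond precision:

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f %z"
    created   = datetime.strptime("2025-12-12 19:32:00.000000 +0000", fmt)  # podCreationTimestamp
    running   = datetime.strptime("2025-12-12 19:32:07.961074 +0000", fmt)  # observedRunningTime
    pull_from = datetime.strptime("2025-12-12 19:32:00.658977 +0000", fmt)  # firstStartedPulling
    pull_to   = datetime.strptime("2025-12-12 19:32:03.817586 +0000", fmt)  # lastFinishedPulling

    e2e = (running - created).total_seconds()
    slo = e2e - (pull_to - pull_from).total_seconds()
    print(f"E2E {e2e:.3f}s, SLO {slo:.3f}s")  # ~7.961s and ~4.802s, matching the record

so the roughly 3.16 s gap between the two reported durations is the tigera-operator image pull itself.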
Dec 12 19:32:08.501000 audit: BPF prog-id=155 op=LOAD Dec 12 19:32:08.510984 kernel: audit: type=1334 audit(1765567928.501:523): prog-id=155 op=LOAD Dec 12 19:32:08.510000 audit: BPF prog-id=156 op=LOAD Dec 12 19:32:08.515617 kernel: audit: type=1334 audit(1765567928.510:524): prog-id=156 op=LOAD Dec 12 19:32:08.515692 kernel: audit: type=1300 audit(1765567928.510:524): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3134 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:08.510000 audit[3342]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3134 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:08.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164353437323134646363643936306438303463373334316135383036 Dec 12 19:32:08.526483 kernel: audit: type=1327 audit(1765567928.510:524): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164353437323134646363643936306438303463373334316135383036 Dec 12 19:32:08.510000 audit: BPF prog-id=156 op=UNLOAD Dec 12 19:32:08.510000 audit[3342]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:08.531836 kernel: audit: type=1334 audit(1765567928.510:525): prog-id=156 op=UNLOAD Dec 12 19:32:08.531906 kernel: audit: type=1300 audit(1765567928.510:525): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:08.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164353437323134646363643936306438303463373334316135383036 Dec 12 19:32:08.510000 audit: BPF prog-id=157 op=LOAD Dec 12 19:32:08.540470 kernel: audit: type=1327 audit(1765567928.510:525): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164353437323134646363643936306438303463373334316135383036 Dec 12 19:32:08.540570 kernel: audit: type=1334 audit(1765567928.510:526): prog-id=157 op=LOAD Dec 12 19:32:08.510000 audit[3342]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3134 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
12 19:32:08.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164353437323134646363643936306438303463373334316135383036 Dec 12 19:32:08.511000 audit: BPF prog-id=158 op=LOAD Dec 12 19:32:08.511000 audit[3342]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3134 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:08.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164353437323134646363643936306438303463373334316135383036 Dec 12 19:32:08.511000 audit: BPF prog-id=158 op=UNLOAD Dec 12 19:32:08.511000 audit[3342]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:08.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164353437323134646363643936306438303463373334316135383036 Dec 12 19:32:08.511000 audit: BPF prog-id=157 op=UNLOAD Dec 12 19:32:08.511000 audit[3342]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3134 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:08.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164353437323134646363643936306438303463373334316135383036 Dec 12 19:32:08.511000 audit: BPF prog-id=159 op=LOAD Dec 12 19:32:08.511000 audit[3342]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3134 pid=3342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:08.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164353437323134646363643936306438303463373334316135383036 Dec 12 19:32:08.586367 containerd[1670]: time="2025-12-12T19:32:08.586318394Z" level=info msg="StartContainer for \"ad547214dccd960d804c7341a58062002207a8578e52719111a867fba3d11405\" returns successfully" Dec 12 19:32:09.439179 sudo[1944]: pam_unix(sudo:session): session closed for user root Dec 12 19:32:09.437000 audit[1944]: USER_END pid=1944 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? 
addr=? terminal=? res=success' Dec 12 19:32:09.438000 audit[1944]: CRED_DISP pid=1944 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 19:32:09.587014 sshd[1943]: Connection closed by 139.178.89.65 port 56056 Dec 12 19:32:09.588758 sshd-session[1940]: pam_unix(sshd:session): session closed for user core Dec 12 19:32:09.591000 audit[1940]: USER_END pid=1940 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:32:09.591000 audit[1940]: CRED_DISP pid=1940 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:32:09.597280 systemd-logind[1642]: Session 9 logged out. Waiting for processes to exit. Dec 12 19:32:09.598653 systemd[1]: sshd@6-10.244.101.34:22-139.178.89.65:56056.service: Deactivated successfully. Dec 12 19:32:09.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.244.101.34:22-139.178.89.65:56056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:32:09.602334 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 19:32:09.603142 systemd[1]: session-9.scope: Consumed 6.750s CPU time, 151.7M memory peak. Dec 12 19:32:09.606414 systemd-logind[1642]: Removed session 9. 
Dec 12 19:32:13.341000 audit[3396]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:13.346685 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 12 19:32:13.346956 kernel: audit: type=1325 audit(1765567933.341:536): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:13.341000 audit[3396]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff33035da0 a2=0 a3=7fff33035d8c items=0 ppid=3062 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:13.358509 kernel: audit: type=1300 audit(1765567933.341:536): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff33035da0 a2=0 a3=7fff33035d8c items=0 ppid=3062 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:13.341000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:13.362505 kernel: audit: type=1327 audit(1765567933.341:536): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:13.356000 audit[3396]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:13.364461 kernel: audit: type=1325 audit(1765567933.356:537): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:13.356000 audit[3396]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff33035da0 a2=0 a3=0 items=0 ppid=3062 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:13.369461 kernel: audit: type=1300 audit(1765567933.356:537): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff33035da0 a2=0 a3=0 items=0 ppid=3062 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:13.356000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:13.371465 kernel: audit: type=1327 audit(1765567933.356:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:13.381000 audit[3398]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:13.387576 kernel: audit: type=1325 audit(1765567933.381:538): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:13.381000 audit[3398]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff628a9ab0 a2=0 a3=7fff628a9a9c items=0 ppid=3062 pid=3398 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:13.393471 kernel: audit: type=1300 audit(1765567933.381:538): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff628a9ab0 a2=0 a3=7fff628a9a9c items=0 ppid=3062 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:13.381000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:13.397466 kernel: audit: type=1327 audit(1765567933.381:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:13.388000 audit[3398]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:13.399467 kernel: audit: type=1325 audit(1765567933.388:539): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:13.388000 audit[3398]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff628a9ab0 a2=0 a3=0 items=0 ppid=3062 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:13.388000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:16.492000 audit[3403]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:16.492000 audit[3403]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffef297e990 a2=0 a3=7ffef297e97c items=0 ppid=3062 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:16.492000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:16.498000 audit[3403]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:16.498000 audit[3403]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffef297e990 a2=0 a3=0 items=0 ppid=3062 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:16.498000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:16.522000 audit[3405]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3405 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:16.522000 audit[3405]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffe3d8c6c0 a2=0 a3=7fffe3d8c6ac items=0 ppid=3062 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:16.522000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:16.527000 audit[3405]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3405 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:16.527000 audit[3405]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffe3d8c6c0 a2=0 a3=0 items=0 ppid=3062 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:16.527000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:17.552000 audit[3407]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:17.552000 audit[3407]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd0bb521d0 a2=0 a3=7ffd0bb521bc items=0 ppid=3062 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:17.552000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:17.556000 audit[3407]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:17.556000 audit[3407]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd0bb521d0 a2=0 a3=0 items=0 ppid=3062 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:17.556000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:18.963000 audit[3409]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:18.969016 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 12 19:32:18.969171 kernel: audit: type=1325 audit(1765567938.963:546): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:18.963000 audit[3409]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcc648aae0 a2=0 a3=7ffcc648aacc items=0 ppid=3062 pid=3409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:18.978703 kernel: audit: type=1300 audit(1765567938.963:546): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcc648aae0 a2=0 a3=7ffcc648aacc items=0 ppid=3062 pid=3409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
19:32:18.963000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:18.982521 kernel: audit: type=1327 audit(1765567938.963:546): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:18.982603 kernel: audit: type=1325 audit(1765567938.980:547): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:18.980000 audit[3409]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:18.980000 audit[3409]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc648aae0 a2=0 a3=0 items=0 ppid=3062 pid=3409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:19.003489 kernel: audit: type=1300 audit(1765567938.980:547): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc648aae0 a2=0 a3=0 items=0 ppid=3062 pid=3409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:18.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:19.011471 kernel: audit: type=1327 audit(1765567938.980:547): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:19.024277 systemd[1]: Created slice kubepods-besteffort-pod6bd39512_9981_4271_b926_dde81d56d85c.slice - libcontainer container kubepods-besteffort-pod6bd39512_9981_4271_b926_dde81d56d85c.slice. Dec 12 19:32:19.092900 kubelet[2938]: I1212 19:32:19.092673 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bd39512-9981-4271-b926-dde81d56d85c-tigera-ca-bundle\") pod \"calico-typha-6554768d75-fs49s\" (UID: \"6bd39512-9981-4271-b926-dde81d56d85c\") " pod="calico-system/calico-typha-6554768d75-fs49s" Dec 12 19:32:19.093722 kubelet[2938]: I1212 19:32:19.093544 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6bd39512-9981-4271-b926-dde81d56d85c-typha-certs\") pod \"calico-typha-6554768d75-fs49s\" (UID: \"6bd39512-9981-4271-b926-dde81d56d85c\") " pod="calico-system/calico-typha-6554768d75-fs49s" Dec 12 19:32:19.093722 kubelet[2938]: I1212 19:32:19.093575 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntrwz\" (UniqueName: \"kubernetes.io/projected/6bd39512-9981-4271-b926-dde81d56d85c-kube-api-access-ntrwz\") pod \"calico-typha-6554768d75-fs49s\" (UID: \"6bd39512-9981-4271-b926-dde81d56d85c\") " pod="calico-system/calico-typha-6554768d75-fs49s" Dec 12 19:32:19.198328 systemd[1]: Created slice kubepods-besteffort-podc26622b9_43bd_40a8_a6f4_d43eb5a248d2.slice - libcontainer container kubepods-besteffort-podc26622b9_43bd_40a8_a6f4_d43eb5a248d2.slice. 
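The audit triplets above (events :536 through :547, types 1325/1300/1327) record a process under ppid 3062, not otherwise identified in this excerpt, repeatedly reloading IPv4 filter and nat rules via iptables-restore in nft mode (exe="/usr/sbin/xtables-nft-multi"); the comm field reads "iptables-restor" only because the kernel truncates it to 15 characters. The proctitle field is the caller's argv, NUL-separated and hex-encoded by auditd. A short, purely illustrative Python decode (an assumed helper, not part of the log) recovers the command line:

    # Illustrative only: decode the hex PROCTITLE value from the audit records above.
    # auditd hex-encodes the process argv, with NUL bytes separating the arguments.
    hex_title = ("69707461626C65732D726573746F7265002D770035002D5700"
                 "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
    argv = bytes.fromhex(hex_title).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # prints: iptables-restore -w 5 -W 100000 --noflush --counters

Across the series the filter table grows from 15 to 21 entries while the nat table stays at 12, which is what the entries= field of each NETFILTER_CFG record shows directly.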
Dec 12 19:32:19.295298 kubelet[2938]: I1212 19:32:19.294086 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c26622b9-43bd-40a8-a6f4-d43eb5a248d2-policysync\") pod \"calico-node-747jf\" (UID: \"c26622b9-43bd-40a8-a6f4-d43eb5a248d2\") " pod="calico-system/calico-node-747jf" Dec 12 19:32:19.296091 kubelet[2938]: I1212 19:32:19.296041 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c26622b9-43bd-40a8-a6f4-d43eb5a248d2-var-lib-calico\") pod \"calico-node-747jf\" (UID: \"c26622b9-43bd-40a8-a6f4-d43eb5a248d2\") " pod="calico-system/calico-node-747jf" Dec 12 19:32:19.296275 kubelet[2938]: I1212 19:32:19.296259 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c26622b9-43bd-40a8-a6f4-d43eb5a248d2-lib-modules\") pod \"calico-node-747jf\" (UID: \"c26622b9-43bd-40a8-a6f4-d43eb5a248d2\") " pod="calico-system/calico-node-747jf" Dec 12 19:32:19.296350 kubelet[2938]: I1212 19:32:19.296286 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c26622b9-43bd-40a8-a6f4-d43eb5a248d2-var-run-calico\") pod \"calico-node-747jf\" (UID: \"c26622b9-43bd-40a8-a6f4-d43eb5a248d2\") " pod="calico-system/calico-node-747jf" Dec 12 19:32:19.296350 kubelet[2938]: I1212 19:32:19.296310 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c26622b9-43bd-40a8-a6f4-d43eb5a248d2-flexvol-driver-host\") pod \"calico-node-747jf\" (UID: \"c26622b9-43bd-40a8-a6f4-d43eb5a248d2\") " pod="calico-system/calico-node-747jf" Dec 12 19:32:19.296350 kubelet[2938]: I1212 19:32:19.296336 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c26622b9-43bd-40a8-a6f4-d43eb5a248d2-cni-bin-dir\") pod \"calico-node-747jf\" (UID: \"c26622b9-43bd-40a8-a6f4-d43eb5a248d2\") " pod="calico-system/calico-node-747jf" Dec 12 19:32:19.296472 kubelet[2938]: I1212 19:32:19.296354 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4jt8\" (UniqueName: \"kubernetes.io/projected/c26622b9-43bd-40a8-a6f4-d43eb5a248d2-kube-api-access-n4jt8\") pod \"calico-node-747jf\" (UID: \"c26622b9-43bd-40a8-a6f4-d43eb5a248d2\") " pod="calico-system/calico-node-747jf" Dec 12 19:32:19.296472 kubelet[2938]: I1212 19:32:19.296371 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c26622b9-43bd-40a8-a6f4-d43eb5a248d2-xtables-lock\") pod \"calico-node-747jf\" (UID: \"c26622b9-43bd-40a8-a6f4-d43eb5a248d2\") " pod="calico-system/calico-node-747jf" Dec 12 19:32:19.296472 kubelet[2938]: I1212 19:32:19.296392 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c26622b9-43bd-40a8-a6f4-d43eb5a248d2-cni-log-dir\") pod \"calico-node-747jf\" (UID: \"c26622b9-43bd-40a8-a6f4-d43eb5a248d2\") " pod="calico-system/calico-node-747jf" Dec 12 19:32:19.296472 kubelet[2938]: I1212 19:32:19.296408 2938 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c26622b9-43bd-40a8-a6f4-d43eb5a248d2-cni-net-dir\") pod \"calico-node-747jf\" (UID: \"c26622b9-43bd-40a8-a6f4-d43eb5a248d2\") " pod="calico-system/calico-node-747jf" Dec 12 19:32:19.296797 kubelet[2938]: I1212 19:32:19.296428 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c26622b9-43bd-40a8-a6f4-d43eb5a248d2-node-certs\") pod \"calico-node-747jf\" (UID: \"c26622b9-43bd-40a8-a6f4-d43eb5a248d2\") " pod="calico-system/calico-node-747jf" Dec 12 19:32:19.296797 kubelet[2938]: I1212 19:32:19.296766 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c26622b9-43bd-40a8-a6f4-d43eb5a248d2-tigera-ca-bundle\") pod \"calico-node-747jf\" (UID: \"c26622b9-43bd-40a8-a6f4-d43eb5a248d2\") " pod="calico-system/calico-node-747jf" Dec 12 19:32:19.334131 containerd[1670]: time="2025-12-12T19:32:19.334034838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6554768d75-fs49s,Uid:6bd39512-9981-4271-b926-dde81d56d85c,Namespace:calico-system,Attempt:0,}" Dec 12 19:32:19.405729 kubelet[2938]: E1212 19:32:19.405506 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.405729 kubelet[2938]: W1212 19:32:19.405561 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.410535 kubelet[2938]: E1212 19:32:19.410492 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.414780 kubelet[2938]: E1212 19:32:19.414746 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.415036 kubelet[2938]: W1212 19:32:19.414970 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.415267 kubelet[2938]: E1212 19:32:19.415015 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.415692 kubelet[2938]: E1212 19:32:19.415674 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.415915 kubelet[2938]: W1212 19:32:19.415901 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.415983 kubelet[2938]: E1212 19:32:19.415972 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:32:19.417058 kubelet[2938]: E1212 19:32:19.417033 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.417242 kubelet[2938]: W1212 19:32:19.417139 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.417242 kubelet[2938]: E1212 19:32:19.417157 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.417782 kubelet[2938]: E1212 19:32:19.417597 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.417782 kubelet[2938]: W1212 19:32:19.417610 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.417782 kubelet[2938]: E1212 19:32:19.417629 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.417991 kubelet[2938]: E1212 19:32:19.417982 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.418062 kubelet[2938]: W1212 19:32:19.418052 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.418154 kubelet[2938]: E1212 19:32:19.418144 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.419770 kubelet[2938]: E1212 19:32:19.419747 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.419934 kubelet[2938]: W1212 19:32:19.419848 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.419934 kubelet[2938]: E1212 19:32:19.419865 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.420195 kubelet[2938]: E1212 19:32:19.420184 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.420567 kubelet[2938]: W1212 19:32:19.420374 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.420567 kubelet[2938]: E1212 19:32:19.420390 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:32:19.421929 containerd[1670]: time="2025-12-12T19:32:19.421867438Z" level=info msg="connecting to shim ee7de69216010ecd06c4ddd94611756f7ffd4742cb702e27e006526cffa6c914" address="unix:///run/containerd/s/cc0877343a24281aa140b89ccee02f8b274faac75f1c1007b1fca4b58e5ae49f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:32:19.424537 kubelet[2938]: E1212 19:32:19.423981 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.424537 kubelet[2938]: W1212 19:32:19.424003 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.424537 kubelet[2938]: E1212 19:32:19.424023 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.451621 kubelet[2938]: E1212 19:32:19.451554 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:32:19.453015 kubelet[2938]: E1212 19:32:19.452967 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.453385 kubelet[2938]: W1212 19:32:19.453076 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.453385 kubelet[2938]: E1212 19:32:19.453103 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.453573 kubelet[2938]: E1212 19:32:19.453561 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.453766 kubelet[2938]: W1212 19:32:19.453632 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.453766 kubelet[2938]: E1212 19:32:19.453649 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.453949 kubelet[2938]: E1212 19:32:19.453940 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.454051 kubelet[2938]: W1212 19:32:19.453994 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.454051 kubelet[2938]: E1212 19:32:19.454008 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:32:19.455699 kubelet[2938]: E1212 19:32:19.455596 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.455699 kubelet[2938]: W1212 19:32:19.455614 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.455699 kubelet[2938]: E1212 19:32:19.455628 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.456273 kubelet[2938]: E1212 19:32:19.456201 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.456273 kubelet[2938]: W1212 19:32:19.456215 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.456273 kubelet[2938]: E1212 19:32:19.456227 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.456708 kubelet[2938]: E1212 19:32:19.456637 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.456708 kubelet[2938]: W1212 19:32:19.456653 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.456708 kubelet[2938]: E1212 19:32:19.456665 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.457173 kubelet[2938]: E1212 19:32:19.456996 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.457173 kubelet[2938]: W1212 19:32:19.457006 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.457173 kubelet[2938]: E1212 19:32:19.457095 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.459139 kubelet[2938]: E1212 19:32:19.459034 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.459139 kubelet[2938]: W1212 19:32:19.459049 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.459139 kubelet[2938]: E1212 19:32:19.459065 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:32:19.459658 kubelet[2938]: E1212 19:32:19.459596 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.459658 kubelet[2938]: W1212 19:32:19.459609 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.459658 kubelet[2938]: E1212 19:32:19.459620 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.460030 kubelet[2938]: E1212 19:32:19.460018 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.460707 kubelet[2938]: W1212 19:32:19.460095 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.460707 kubelet[2938]: E1212 19:32:19.460109 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.461173 kubelet[2938]: E1212 19:32:19.461091 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.461173 kubelet[2938]: W1212 19:32:19.461104 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.461173 kubelet[2938]: E1212 19:32:19.461116 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.461713 kubelet[2938]: E1212 19:32:19.461573 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.461713 kubelet[2938]: W1212 19:32:19.461589 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.461713 kubelet[2938]: E1212 19:32:19.461603 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.462002 kubelet[2938]: E1212 19:32:19.461991 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.462120 kubelet[2938]: W1212 19:32:19.462058 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.462120 kubelet[2938]: E1212 19:32:19.462082 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:32:19.462581 kubelet[2938]: E1212 19:32:19.462567 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.462758 kubelet[2938]: W1212 19:32:19.462650 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.462758 kubelet[2938]: E1212 19:32:19.462664 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.462896 kubelet[2938]: E1212 19:32:19.462888 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.462948 kubelet[2938]: W1212 19:32:19.462939 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.462997 kubelet[2938]: E1212 19:32:19.462988 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.463452 kubelet[2938]: E1212 19:32:19.463230 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.463452 kubelet[2938]: W1212 19:32:19.463241 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.463452 kubelet[2938]: E1212 19:32:19.463251 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.463841 kubelet[2938]: E1212 19:32:19.463783 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.463841 kubelet[2938]: W1212 19:32:19.463795 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.463841 kubelet[2938]: E1212 19:32:19.463806 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.464233 kubelet[2938]: E1212 19:32:19.464223 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.464369 kubelet[2938]: W1212 19:32:19.464292 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.464369 kubelet[2938]: E1212 19:32:19.464304 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:32:19.464793 kubelet[2938]: E1212 19:32:19.464735 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.464793 kubelet[2938]: W1212 19:32:19.464746 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.464793 kubelet[2938]: E1212 19:32:19.464756 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.465550 kubelet[2938]: E1212 19:32:19.465540 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.466491 kubelet[2938]: W1212 19:32:19.465611 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.466491 kubelet[2938]: E1212 19:32:19.465625 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.482530 kubelet[2938]: E1212 19:32:19.482401 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.482530 kubelet[2938]: W1212 19:32:19.482431 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.482530 kubelet[2938]: E1212 19:32:19.482473 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.501466 kubelet[2938]: E1212 19:32:19.500385 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.501928 kubelet[2938]: W1212 19:32:19.501693 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.501928 kubelet[2938]: E1212 19:32:19.501733 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:32:19.501928 kubelet[2938]: I1212 19:32:19.501775 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3228a46d-97a1-46c2-a390-a03b4bb70892-varrun\") pod \"csi-node-driver-klld4\" (UID: \"3228a46d-97a1-46c2-a390-a03b4bb70892\") " pod="calico-system/csi-node-driver-klld4" Dec 12 19:32:19.502210 kubelet[2938]: E1212 19:32:19.502169 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.502210 kubelet[2938]: W1212 19:32:19.502184 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.502210 kubelet[2938]: E1212 19:32:19.502199 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.502373 kubelet[2938]: I1212 19:32:19.502327 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3228a46d-97a1-46c2-a390-a03b4bb70892-kubelet-dir\") pod \"csi-node-driver-klld4\" (UID: \"3228a46d-97a1-46c2-a390-a03b4bb70892\") " pod="calico-system/csi-node-driver-klld4" Dec 12 19:32:19.502644 kubelet[2938]: E1212 19:32:19.502628 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.502799 kubelet[2938]: W1212 19:32:19.502759 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.502799 kubelet[2938]: E1212 19:32:19.502787 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.502932 kubelet[2938]: I1212 19:32:19.502894 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3228a46d-97a1-46c2-a390-a03b4bb70892-registration-dir\") pod \"csi-node-driver-klld4\" (UID: \"3228a46d-97a1-46c2-a390-a03b4bb70892\") " pod="calico-system/csi-node-driver-klld4" Dec 12 19:32:19.503201 kubelet[2938]: E1212 19:32:19.503159 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.503201 kubelet[2938]: W1212 19:32:19.503171 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.503201 kubelet[2938]: E1212 19:32:19.503189 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:32:19.503412 kubelet[2938]: I1212 19:32:19.503337 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3228a46d-97a1-46c2-a390-a03b4bb70892-socket-dir\") pod \"csi-node-driver-klld4\" (UID: \"3228a46d-97a1-46c2-a390-a03b4bb70892\") " pod="calico-system/csi-node-driver-klld4" Dec 12 19:32:19.503658 kubelet[2938]: E1212 19:32:19.503621 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.503658 kubelet[2938]: W1212 19:32:19.503634 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.503658 kubelet[2938]: E1212 19:32:19.503647 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.503861 kubelet[2938]: I1212 19:32:19.503845 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pl7j\" (UniqueName: \"kubernetes.io/projected/3228a46d-97a1-46c2-a390-a03b4bb70892-kube-api-access-4pl7j\") pod \"csi-node-driver-klld4\" (UID: \"3228a46d-97a1-46c2-a390-a03b4bb70892\") " pod="calico-system/csi-node-driver-klld4" Dec 12 19:32:19.504142 kubelet[2938]: E1212 19:32:19.504107 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.504142 kubelet[2938]: W1212 19:32:19.504119 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.504142 kubelet[2938]: E1212 19:32:19.504131 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.504494 kubelet[2938]: E1212 19:32:19.504460 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.504494 kubelet[2938]: W1212 19:32:19.504471 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.504494 kubelet[2938]: E1212 19:32:19.504482 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.504833 kubelet[2938]: E1212 19:32:19.504800 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.504833 kubelet[2938]: W1212 19:32:19.504811 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.504833 kubelet[2938]: E1212 19:32:19.504822 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:32:19.505358 kubelet[2938]: E1212 19:32:19.505307 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.505358 kubelet[2938]: W1212 19:32:19.505325 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.505358 kubelet[2938]: E1212 19:32:19.505340 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.505852 kubelet[2938]: E1212 19:32:19.505814 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.505852 kubelet[2938]: W1212 19:32:19.505827 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.505852 kubelet[2938]: E1212 19:32:19.505839 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.506258 kubelet[2938]: E1212 19:32:19.506219 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.506258 kubelet[2938]: W1212 19:32:19.506232 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.506258 kubelet[2938]: E1212 19:32:19.506243 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.506691 kubelet[2938]: E1212 19:32:19.506654 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.506691 kubelet[2938]: W1212 19:32:19.506666 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.506691 kubelet[2938]: E1212 19:32:19.506677 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.507042 kubelet[2938]: E1212 19:32:19.507003 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.507042 kubelet[2938]: W1212 19:32:19.507018 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.507042 kubelet[2938]: E1212 19:32:19.507028 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:32:19.507400 kubelet[2938]: E1212 19:32:19.507365 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.507400 kubelet[2938]: W1212 19:32:19.507376 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.507400 kubelet[2938]: E1212 19:32:19.507387 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.507767 kubelet[2938]: E1212 19:32:19.507727 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.507767 kubelet[2938]: W1212 19:32:19.507738 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.507767 kubelet[2938]: E1212 19:32:19.507749 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.520695 containerd[1670]: time="2025-12-12T19:32:19.520639318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-747jf,Uid:c26622b9-43bd-40a8-a6f4-d43eb5a248d2,Namespace:calico-system,Attempt:0,}" Dec 12 19:32:19.537373 systemd[1]: Started cri-containerd-ee7de69216010ecd06c4ddd94611756f7ffd4742cb702e27e006526cffa6c914.scope - libcontainer container ee7de69216010ecd06c4ddd94611756f7ffd4742cb702e27e006526cffa6c914. 
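The dense blocks of kubelet driver-call.go:262 / driver-call.go:149 / plugins.go:703 messages come from the FlexVolume plugin prober: it finds the nodeagent~uds plugin directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ but the uds executable is not present yet, so every probe gets empty output and the JSON unmarshal of the init reply fails with "unexpected end of JSON input". The contract the prober expects is only that the driver executable print a JSON status object when invoked with init. A minimal, purely illustrative stub under that assumption (not the real Calico uds binary):

    #!/usr/bin/env python3
    # Illustrative FlexVolume driver stub -- NOT the Calico 'uds' binary. It only shows
    # the shape of the JSON reply the kubelet tries to parse; an empty reply (missing
    # executable) is what produces "unexpected end of JSON input" in the log above.
    import json
    import sys

    def main() -> int:
        cmd = sys.argv[1] if len(sys.argv) > 1 else ""
        if cmd == "init":
            # Advertise no attach/detach support so the kubelet skips those calls.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        print(json.dumps({"status": "Not supported", "message": "unhandled call: " + cmd}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())

On a standard Calico install these messages are usually transient: the calico-node pod declared above mounts flexvol-driver-host for exactly this path, and an init container normally populates it, after which the probe starts returning valid JSON.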
Dec 12 19:32:19.575048 containerd[1670]: time="2025-12-12T19:32:19.574981819Z" level=info msg="connecting to shim b3ff3e297cdd057ebf1826e250d8cdef3e324cd7bd379059b88600fa3cef38e5" address="unix:///run/containerd/s/4429146e898965112139654d8a100012215e0b5f4f64686bb314eee27522fca8" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:32:19.576000 audit: BPF prog-id=160 op=LOAD Dec 12 19:32:19.582040 kernel: audit: type=1334 audit(1765567939.576:548): prog-id=160 op=LOAD Dec 12 19:32:19.580000 audit: BPF prog-id=161 op=LOAD Dec 12 19:32:19.586570 kernel: audit: type=1334 audit(1765567939.580:549): prog-id=161 op=LOAD Dec 12 19:32:19.580000 audit[3442]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3423 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:19.590504 kernel: audit: type=1300 audit(1765567939.580:549): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3423 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:19.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565376465363932313630313065636430366334646464393436313137 Dec 12 19:32:19.595502 kernel: audit: type=1327 audit(1765567939.580:549): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565376465363932313630313065636430366334646464393436313137 Dec 12 19:32:19.580000 audit: BPF prog-id=161 op=UNLOAD Dec 12 19:32:19.580000 audit[3442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3423 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:19.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565376465363932313630313065636430366334646464393436313137 Dec 12 19:32:19.580000 audit: BPF prog-id=162 op=LOAD Dec 12 19:32:19.580000 audit[3442]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3423 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:19.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565376465363932313630313065636430366334646464393436313137 Dec 12 19:32:19.580000 audit: BPF prog-id=163 op=LOAD Dec 12 19:32:19.580000 audit[3442]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3423 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:19.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565376465363932313630313065636430366334646464393436313137 Dec 12 19:32:19.580000 audit: BPF prog-id=163 op=UNLOAD Dec 12 19:32:19.580000 audit[3442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3423 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:19.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565376465363932313630313065636430366334646464393436313137 Dec 12 19:32:19.581000 audit: BPF prog-id=162 op=UNLOAD Dec 12 19:32:19.581000 audit[3442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3423 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:19.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565376465363932313630313065636430366334646464393436313137 Dec 12 19:32:19.581000 audit: BPF prog-id=164 op=LOAD Dec 12 19:32:19.581000 audit[3442]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3423 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:19.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565376465363932313630313065636430366334646464393436313137 Dec 12 19:32:19.605393 kubelet[2938]: E1212 19:32:19.605302 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.605393 kubelet[2938]: W1212 19:32:19.605328 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.605393 kubelet[2938]: E1212 19:32:19.605362 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:32:19.606919 kubelet[2938]: E1212 19:32:19.606870 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.606919 kubelet[2938]: W1212 19:32:19.606887 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.606919 kubelet[2938]: E1212 19:32:19.606904 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.611817 kubelet[2938]: E1212 19:32:19.609541 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.611817 kubelet[2938]: W1212 19:32:19.609557 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.611817 kubelet[2938]: E1212 19:32:19.609572 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.611817 kubelet[2938]: E1212 19:32:19.609815 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.611817 kubelet[2938]: W1212 19:32:19.609823 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.611817 kubelet[2938]: E1212 19:32:19.609832 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.611817 kubelet[2938]: E1212 19:32:19.610044 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.611817 kubelet[2938]: W1212 19:32:19.610052 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.611817 kubelet[2938]: E1212 19:32:19.610074 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.611817 kubelet[2938]: E1212 19:32:19.611497 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.612171 kubelet[2938]: W1212 19:32:19.611510 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.612171 kubelet[2938]: E1212 19:32:19.611522 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:32:19.612171 kubelet[2938]: E1212 19:32:19.611725 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.612171 kubelet[2938]: W1212 19:32:19.611733 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.612171 kubelet[2938]: E1212 19:32:19.611741 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.612620 kubelet[2938]: E1212 19:32:19.612478 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.612620 kubelet[2938]: W1212 19:32:19.612490 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.612620 kubelet[2938]: E1212 19:32:19.612509 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.613548 kubelet[2938]: E1212 19:32:19.612799 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.613548 kubelet[2938]: W1212 19:32:19.612809 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.613548 kubelet[2938]: E1212 19:32:19.612819 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.613874 kubelet[2938]: E1212 19:32:19.613863 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.613943 kubelet[2938]: W1212 19:32:19.613933 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.614101 kubelet[2938]: E1212 19:32:19.614091 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.615656 kubelet[2938]: E1212 19:32:19.615641 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.615764 kubelet[2938]: W1212 19:32:19.615753 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.615825 kubelet[2938]: E1212 19:32:19.615811 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:32:19.616140 kubelet[2938]: E1212 19:32:19.616128 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.616216 kubelet[2938]: W1212 19:32:19.616202 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.616410 kubelet[2938]: E1212 19:32:19.616267 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.617701 kubelet[2938]: E1212 19:32:19.617686 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.617869 kubelet[2938]: W1212 19:32:19.617778 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.617869 kubelet[2938]: E1212 19:32:19.617794 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.618651 kubelet[2938]: E1212 19:32:19.618611 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.618651 kubelet[2938]: W1212 19:32:19.618625 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.618651 kubelet[2938]: E1212 19:32:19.618638 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.619715 kubelet[2938]: E1212 19:32:19.619668 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.619715 kubelet[2938]: W1212 19:32:19.619689 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.619715 kubelet[2938]: E1212 19:32:19.619701 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.620109 kubelet[2938]: E1212 19:32:19.620059 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.620109 kubelet[2938]: W1212 19:32:19.620080 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.620109 kubelet[2938]: E1212 19:32:19.620091 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:32:19.621583 kubelet[2938]: E1212 19:32:19.621541 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.621583 kubelet[2938]: W1212 19:32:19.621557 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.621583 kubelet[2938]: E1212 19:32:19.621569 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.622195 kubelet[2938]: E1212 19:32:19.622155 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.622195 kubelet[2938]: W1212 19:32:19.622169 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.622195 kubelet[2938]: E1212 19:32:19.622181 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.624350 kubelet[2938]: E1212 19:32:19.624306 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.624350 kubelet[2938]: W1212 19:32:19.624321 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.624350 kubelet[2938]: E1212 19:32:19.624338 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.624790 kubelet[2938]: E1212 19:32:19.624751 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.624790 kubelet[2938]: W1212 19:32:19.624763 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.624790 kubelet[2938]: E1212 19:32:19.624777 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.625175 kubelet[2938]: E1212 19:32:19.625162 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.625287 kubelet[2938]: W1212 19:32:19.625234 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.625287 kubelet[2938]: E1212 19:32:19.625249 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:32:19.626717 kubelet[2938]: E1212 19:32:19.626625 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.626717 kubelet[2938]: W1212 19:32:19.626638 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.626717 kubelet[2938]: E1212 19:32:19.626650 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.627050 kubelet[2938]: E1212 19:32:19.626933 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.627050 kubelet[2938]: W1212 19:32:19.626944 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.627050 kubelet[2938]: E1212 19:32:19.626954 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.627759 kubelet[2938]: E1212 19:32:19.627747 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.628516 kubelet[2938]: W1212 19:32:19.628497 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.628672 kubelet[2938]: E1212 19:32:19.628595 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.628954 kubelet[2938]: E1212 19:32:19.628912 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.628954 kubelet[2938]: W1212 19:32:19.628923 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.628954 kubelet[2938]: E1212 19:32:19.628934 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 19:32:19.656642 kubelet[2938]: E1212 19:32:19.656408 2938 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 19:32:19.656642 kubelet[2938]: W1212 19:32:19.656527 2938 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 19:32:19.656642 kubelet[2938]: E1212 19:32:19.656570 2938 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 19:32:19.668166 systemd[1]: Started cri-containerd-b3ff3e297cdd057ebf1826e250d8cdef3e324cd7bd379059b88600fa3cef38e5.scope - libcontainer container b3ff3e297cdd057ebf1826e250d8cdef3e324cd7bd379059b88600fa3cef38e5. Dec 12 19:32:19.698000 audit: BPF prog-id=165 op=LOAD Dec 12 19:32:19.699000 audit: BPF prog-id=166 op=LOAD Dec 12 19:32:19.699000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3517 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:19.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233666633653239376364643035376562663138323665323530643863 Dec 12 19:32:19.699000 audit: BPF prog-id=166 op=UNLOAD Dec 12 19:32:19.699000 audit[3536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3517 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:19.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233666633653239376364643035376562663138323665323530643863 Dec 12 19:32:19.699000 audit: BPF prog-id=167 op=LOAD Dec 12 19:32:19.699000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3517 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:19.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233666633653239376364643035376562663138323665323530643863 Dec 12 19:32:19.700000 audit: BPF prog-id=168 op=LOAD Dec 12 19:32:19.700000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3517 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:19.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233666633653239376364643035376562663138323665323530643863 Dec 12 19:32:19.700000 audit: BPF prog-id=168 op=UNLOAD Dec 12 19:32:19.700000 audit[3536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3517 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:19.700000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233666633653239376364643035376562663138323665323530643863 Dec 12 19:32:19.700000 audit: BPF prog-id=167 op=UNLOAD Dec 12 19:32:19.700000 audit[3536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3517 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:19.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233666633653239376364643035376562663138323665323530643863 Dec 12 19:32:19.700000 audit: BPF prog-id=169 op=LOAD Dec 12 19:32:19.700000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3517 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:19.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233666633653239376364643035376562663138323665323530643863 Dec 12 19:32:19.739383 containerd[1670]: time="2025-12-12T19:32:19.739019544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-747jf,Uid:c26622b9-43bd-40a8-a6f4-d43eb5a248d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3ff3e297cdd057ebf1826e250d8cdef3e324cd7bd379059b88600fa3cef38e5\"" Dec 12 19:32:19.743958 containerd[1670]: time="2025-12-12T19:32:19.743802038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 19:32:19.779970 containerd[1670]: time="2025-12-12T19:32:19.779832870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6554768d75-fs49s,Uid:6bd39512-9981-4271-b926-dde81d56d85c,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee7de69216010ecd06c4ddd94611756f7ffd4742cb702e27e006526cffa6c914\"" Dec 12 19:32:20.021000 audit[3592]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3592 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:20.021000 audit[3592]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcf4baf9e0 a2=0 a3=7ffcf4baf9cc items=0 ppid=3062 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:20.021000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:20.025000 audit[3592]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3592 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:20.025000 audit[3592]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf4baf9e0 a2=0 a3=0 items=0 ppid=3062 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:20.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:21.211351 kubelet[2938]: E1212 19:32:21.211297 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:32:21.234595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2677451834.mount: Deactivated successfully. Dec 12 19:32:21.334475 containerd[1670]: time="2025-12-12T19:32:21.334371745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:32:21.335113 containerd[1670]: time="2025-12-12T19:32:21.334920077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 12 19:32:21.335735 containerd[1670]: time="2025-12-12T19:32:21.335710331Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:32:21.337790 containerd[1670]: time="2025-12-12T19:32:21.337482578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:32:21.338190 containerd[1670]: time="2025-12-12T19:32:21.338160672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.594313139s" Dec 12 19:32:21.338252 containerd[1670]: time="2025-12-12T19:32:21.338195137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 12 19:32:21.339609 containerd[1670]: time="2025-12-12T19:32:21.339577700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 19:32:21.341815 containerd[1670]: time="2025-12-12T19:32:21.341746988Z" level=info msg="CreateContainer within sandbox \"b3ff3e297cdd057ebf1826e250d8cdef3e324cd7bd379059b88600fa3cef38e5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 19:32:21.368848 containerd[1670]: time="2025-12-12T19:32:21.368808214Z" level=info msg="Container 13cb95a51d30c52ee98c12f934ad9a3e877c2662ab6fd2f5aaa0c9dc29fb48cd: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:32:21.389572 containerd[1670]: time="2025-12-12T19:32:21.389520359Z" level=info msg="CreateContainer within sandbox \"b3ff3e297cdd057ebf1826e250d8cdef3e324cd7bd379059b88600fa3cef38e5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"13cb95a51d30c52ee98c12f934ad9a3e877c2662ab6fd2f5aaa0c9dc29fb48cd\"" Dec 12 19:32:21.394985 containerd[1670]: time="2025-12-12T19:32:21.394220547Z" level=info 
msg="StartContainer for \"13cb95a51d30c52ee98c12f934ad9a3e877c2662ab6fd2f5aaa0c9dc29fb48cd\"" Dec 12 19:32:21.397999 containerd[1670]: time="2025-12-12T19:32:21.397962080Z" level=info msg="connecting to shim 13cb95a51d30c52ee98c12f934ad9a3e877c2662ab6fd2f5aaa0c9dc29fb48cd" address="unix:///run/containerd/s/4429146e898965112139654d8a100012215e0b5f4f64686bb314eee27522fca8" protocol=ttrpc version=3 Dec 12 19:32:21.422683 systemd[1]: Started cri-containerd-13cb95a51d30c52ee98c12f934ad9a3e877c2662ab6fd2f5aaa0c9dc29fb48cd.scope - libcontainer container 13cb95a51d30c52ee98c12f934ad9a3e877c2662ab6fd2f5aaa0c9dc29fb48cd. Dec 12 19:32:21.483000 audit: BPF prog-id=170 op=LOAD Dec 12 19:32:21.483000 audit[3601]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3517 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:21.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133636239356135316433306335326565393863313266393334616439 Dec 12 19:32:21.483000 audit: BPF prog-id=171 op=LOAD Dec 12 19:32:21.483000 audit[3601]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3517 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:21.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133636239356135316433306335326565393863313266393334616439 Dec 12 19:32:21.483000 audit: BPF prog-id=171 op=UNLOAD Dec 12 19:32:21.483000 audit[3601]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3517 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:21.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133636239356135316433306335326565393863313266393334616439 Dec 12 19:32:21.483000 audit: BPF prog-id=170 op=UNLOAD Dec 12 19:32:21.483000 audit[3601]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3517 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:21.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133636239356135316433306335326565393863313266393334616439 Dec 12 19:32:21.483000 audit: BPF prog-id=172 op=LOAD Dec 12 19:32:21.483000 audit[3601]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3517 pid=3601 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:21.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133636239356135316433306335326565393863313266393334616439 Dec 12 19:32:21.523679 systemd[1]: cri-containerd-13cb95a51d30c52ee98c12f934ad9a3e877c2662ab6fd2f5aaa0c9dc29fb48cd.scope: Deactivated successfully. Dec 12 19:32:21.525159 containerd[1670]: time="2025-12-12T19:32:21.525125394Z" level=info msg="StartContainer for \"13cb95a51d30c52ee98c12f934ad9a3e877c2662ab6fd2f5aaa0c9dc29fb48cd\" returns successfully" Dec 12 19:32:21.525000 audit: BPF prog-id=172 op=UNLOAD Dec 12 19:32:21.527938 containerd[1670]: time="2025-12-12T19:32:21.527870836Z" level=info msg="received container exit event container_id:\"13cb95a51d30c52ee98c12f934ad9a3e877c2662ab6fd2f5aaa0c9dc29fb48cd\" id:\"13cb95a51d30c52ee98c12f934ad9a3e877c2662ab6fd2f5aaa0c9dc29fb48cd\" pid:3614 exited_at:{seconds:1765567941 nanos:527520073}" Dec 12 19:32:21.558216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13cb95a51d30c52ee98c12f934ad9a3e877c2662ab6fd2f5aaa0c9dc29fb48cd-rootfs.mount: Deactivated successfully. Dec 12 19:32:23.212332 kubelet[2938]: E1212 19:32:23.212038 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:32:24.023728 containerd[1670]: time="2025-12-12T19:32:24.023649567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:32:24.025473 containerd[1670]: time="2025-12-12T19:32:24.025408072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 12 19:32:24.026085 containerd[1670]: time="2025-12-12T19:32:24.026041010Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:32:24.028327 containerd[1670]: time="2025-12-12T19:32:24.028277634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:32:24.029138 containerd[1670]: time="2025-12-12T19:32:24.029098957Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.688914362s" Dec 12 19:32:24.029138 containerd[1670]: time="2025-12-12T19:32:24.029133828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 12 19:32:24.032868 containerd[1670]: time="2025-12-12T19:32:24.032837379Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 19:32:24.088742 containerd[1670]: time="2025-12-12T19:32:24.088701054Z" level=info msg="CreateContainer within sandbox \"ee7de69216010ecd06c4ddd94611756f7ffd4742cb702e27e006526cffa6c914\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 19:32:24.100575 containerd[1670]: time="2025-12-12T19:32:24.100514094Z" level=info msg="Container de2355a7967d2e1847b5f9328832d9468d0973ca02486877911bf8f0dfdd53b7: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:32:24.110044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3249400979.mount: Deactivated successfully. Dec 12 19:32:24.112629 containerd[1670]: time="2025-12-12T19:32:24.112597738Z" level=info msg="CreateContainer within sandbox \"ee7de69216010ecd06c4ddd94611756f7ffd4742cb702e27e006526cffa6c914\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"de2355a7967d2e1847b5f9328832d9468d0973ca02486877911bf8f0dfdd53b7\"" Dec 12 19:32:24.114583 containerd[1670]: time="2025-12-12T19:32:24.114553539Z" level=info msg="StartContainer for \"de2355a7967d2e1847b5f9328832d9468d0973ca02486877911bf8f0dfdd53b7\"" Dec 12 19:32:24.117420 containerd[1670]: time="2025-12-12T19:32:24.117394444Z" level=info msg="connecting to shim de2355a7967d2e1847b5f9328832d9468d0973ca02486877911bf8f0dfdd53b7" address="unix:///run/containerd/s/cc0877343a24281aa140b89ccee02f8b274faac75f1c1007b1fca4b58e5ae49f" protocol=ttrpc version=3 Dec 12 19:32:24.148722 systemd[1]: Started cri-containerd-de2355a7967d2e1847b5f9328832d9468d0973ca02486877911bf8f0dfdd53b7.scope - libcontainer container de2355a7967d2e1847b5f9328832d9468d0973ca02486877911bf8f0dfdd53b7. Dec 12 19:32:24.181620 kernel: kauditd_printk_skb: 62 callbacks suppressed Dec 12 19:32:24.181869 kernel: audit: type=1334 audit(1765567944.173:572): prog-id=173 op=LOAD Dec 12 19:32:24.173000 audit: BPF prog-id=173 op=LOAD Dec 12 19:32:24.183000 audit: BPF prog-id=174 op=LOAD Dec 12 19:32:24.187454 kernel: audit: type=1334 audit(1765567944.183:573): prog-id=174 op=LOAD Dec 12 19:32:24.192181 kernel: audit: type=1300 audit(1765567944.183:573): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3423 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:24.183000 audit[3655]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3423 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:24.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323335356137393637643265313834376235663933323838333264 Dec 12 19:32:24.197471 kernel: audit: type=1327 audit(1765567944.183:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323335356137393637643265313834376235663933323838333264 Dec 12 19:32:24.183000 audit: BPF prog-id=174 op=UNLOAD Dec 12 19:32:24.183000 audit[3655]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 
items=0 ppid=3423 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:24.201636 kernel: audit: type=1334 audit(1765567944.183:574): prog-id=174 op=UNLOAD Dec 12 19:32:24.201721 kernel: audit: type=1300 audit(1765567944.183:574): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3423 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:24.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323335356137393637643265313834376235663933323838333264 Dec 12 19:32:24.208488 kernel: audit: type=1327 audit(1765567944.183:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323335356137393637643265313834376235663933323838333264 Dec 12 19:32:24.183000 audit: BPF prog-id=175 op=LOAD Dec 12 19:32:24.183000 audit[3655]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3423 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:24.211775 kernel: audit: type=1334 audit(1765567944.183:575): prog-id=175 op=LOAD Dec 12 19:32:24.211839 kernel: audit: type=1300 audit(1765567944.183:575): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3423 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:24.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323335356137393637643265313834376235663933323838333264 Dec 12 19:32:24.214591 kernel: audit: type=1327 audit(1765567944.183:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323335356137393637643265313834376235663933323838333264 Dec 12 19:32:24.183000 audit: BPF prog-id=176 op=LOAD Dec 12 19:32:24.183000 audit[3655]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3423 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:24.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323335356137393637643265313834376235663933323838333264 Dec 12 19:32:24.183000 audit: BPF prog-id=176 op=UNLOAD Dec 12 19:32:24.183000 audit[3655]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3423 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:24.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323335356137393637643265313834376235663933323838333264 Dec 12 19:32:24.183000 audit: BPF prog-id=175 op=UNLOAD Dec 12 19:32:24.183000 audit[3655]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3423 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:24.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323335356137393637643265313834376235663933323838333264 Dec 12 19:32:24.184000 audit: BPF prog-id=177 op=LOAD Dec 12 19:32:24.184000 audit[3655]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3423 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:24.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465323335356137393637643265313834376235663933323838333264 Dec 12 19:32:24.260768 containerd[1670]: time="2025-12-12T19:32:24.260652887Z" level=info msg="StartContainer for \"de2355a7967d2e1847b5f9328832d9468d0973ca02486877911bf8f0dfdd53b7\" returns successfully" Dec 12 19:32:24.527767 kubelet[2938]: I1212 19:32:24.525572 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6554768d75-fs49s" podStartSLOduration=2.275326864 podStartE2EDuration="6.52554947s" podCreationTimestamp="2025-12-12 19:32:18 +0000 UTC" firstStartedPulling="2025-12-12 19:32:19.782359599 +0000 UTC m=+24.800511498" lastFinishedPulling="2025-12-12 19:32:24.032582207 +0000 UTC m=+29.050734104" observedRunningTime="2025-12-12 19:32:24.515727668 +0000 UTC m=+29.533879602" watchObservedRunningTime="2025-12-12 19:32:24.52554947 +0000 UTC m=+29.543701388" Dec 12 19:32:25.213304 kubelet[2938]: E1212 19:32:25.213187 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:32:25.483852 kubelet[2938]: I1212 19:32:25.483736 2938 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 19:32:27.222969 kubelet[2938]: E1212 19:32:27.222885 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin 
not initialized" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:32:29.216928 kubelet[2938]: E1212 19:32:29.216811 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:32:31.212687 kubelet[2938]: E1212 19:32:31.212388 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:32:31.322782 containerd[1670]: time="2025-12-12T19:32:31.322696108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:32:31.324225 containerd[1670]: time="2025-12-12T19:32:31.324182337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 12 19:32:31.324512 containerd[1670]: time="2025-12-12T19:32:31.324354007Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:32:31.327357 containerd[1670]: time="2025-12-12T19:32:31.327304807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:32:31.328780 containerd[1670]: time="2025-12-12T19:32:31.328594583Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 7.295728681s" Dec 12 19:32:31.328780 containerd[1670]: time="2025-12-12T19:32:31.328633070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 12 19:32:31.335982 containerd[1670]: time="2025-12-12T19:32:31.335366953Z" level=info msg="CreateContainer within sandbox \"b3ff3e297cdd057ebf1826e250d8cdef3e324cd7bd379059b88600fa3cef38e5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 19:32:31.349709 containerd[1670]: time="2025-12-12T19:32:31.349631222Z" level=info msg="Container 924733f594a202be0c5bcaf5737da635bd34232f41e4d184545111cb3b05136d: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:32:31.364005 containerd[1670]: time="2025-12-12T19:32:31.363876192Z" level=info msg="CreateContainer within sandbox \"b3ff3e297cdd057ebf1826e250d8cdef3e324cd7bd379059b88600fa3cef38e5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"924733f594a202be0c5bcaf5737da635bd34232f41e4d184545111cb3b05136d\"" Dec 12 19:32:31.366031 containerd[1670]: time="2025-12-12T19:32:31.365973571Z" level=info msg="StartContainer for \"924733f594a202be0c5bcaf5737da635bd34232f41e4d184545111cb3b05136d\"" Dec 12 19:32:31.369837 containerd[1670]: 
time="2025-12-12T19:32:31.369798884Z" level=info msg="connecting to shim 924733f594a202be0c5bcaf5737da635bd34232f41e4d184545111cb3b05136d" address="unix:///run/containerd/s/4429146e898965112139654d8a100012215e0b5f4f64686bb314eee27522fca8" protocol=ttrpc version=3 Dec 12 19:32:31.404109 systemd[1]: Started cri-containerd-924733f594a202be0c5bcaf5737da635bd34232f41e4d184545111cb3b05136d.scope - libcontainer container 924733f594a202be0c5bcaf5737da635bd34232f41e4d184545111cb3b05136d. Dec 12 19:32:31.463000 audit: BPF prog-id=178 op=LOAD Dec 12 19:32:31.471299 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 12 19:32:31.471482 kernel: audit: type=1334 audit(1765567951.463:580): prog-id=178 op=LOAD Dec 12 19:32:31.472197 kernel: audit: type=1300 audit(1765567951.463:580): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3517 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:31.463000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3517 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:31.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932343733336635393461323032626530633562636166353733376461 Dec 12 19:32:31.475958 kernel: audit: type=1327 audit(1765567951.463:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932343733336635393461323032626530633562636166353733376461 Dec 12 19:32:31.464000 audit: BPF prog-id=179 op=LOAD Dec 12 19:32:31.464000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3517 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:31.481036 kernel: audit: type=1334 audit(1765567951.464:581): prog-id=179 op=LOAD Dec 12 19:32:31.481106 kernel: audit: type=1300 audit(1765567951.464:581): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3517 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:31.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932343733336635393461323032626530633562636166353733376461 Dec 12 19:32:31.484277 kernel: audit: type=1327 audit(1765567951.464:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932343733336635393461323032626530633562636166353733376461 Dec 12 19:32:31.464000 audit: BPF prog-id=179 
op=UNLOAD Dec 12 19:32:31.486576 kernel: audit: type=1334 audit(1765567951.464:582): prog-id=179 op=UNLOAD Dec 12 19:32:31.464000 audit[3703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3517 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:31.490832 kernel: audit: type=1300 audit(1765567951.464:582): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3517 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:31.490960 kernel: audit: type=1327 audit(1765567951.464:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932343733336635393461323032626530633562636166353733376461 Dec 12 19:32:31.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932343733336635393461323032626530633562636166353733376461 Dec 12 19:32:31.464000 audit: BPF prog-id=178 op=UNLOAD Dec 12 19:32:31.464000 audit[3703]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3517 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:31.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932343733336635393461323032626530633562636166353733376461 Dec 12 19:32:31.464000 audit: BPF prog-id=180 op=LOAD Dec 12 19:32:31.497022 kernel: audit: type=1334 audit(1765567951.464:583): prog-id=178 op=UNLOAD Dec 12 19:32:31.464000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3517 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:31.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932343733336635393461323032626530633562636166353733376461 Dec 12 19:32:31.582943 containerd[1670]: time="2025-12-12T19:32:31.582842107Z" level=info msg="StartContainer for \"924733f594a202be0c5bcaf5737da635bd34232f41e4d184545111cb3b05136d\" returns successfully" Dec 12 19:32:32.215218 systemd[1]: cri-containerd-924733f594a202be0c5bcaf5737da635bd34232f41e4d184545111cb3b05136d.scope: Deactivated successfully. Dec 12 19:32:32.216769 systemd[1]: cri-containerd-924733f594a202be0c5bcaf5737da635bd34232f41e4d184545111cb3b05136d.scope: Consumed 675ms CPU time, 159.8M memory peak, 10M read from disk, 171.3M written to disk. 
Dec 12 19:32:32.218000 audit: BPF prog-id=180 op=UNLOAD Dec 12 19:32:32.242633 containerd[1670]: time="2025-12-12T19:32:32.242566383Z" level=info msg="received container exit event container_id:\"924733f594a202be0c5bcaf5737da635bd34232f41e4d184545111cb3b05136d\" id:\"924733f594a202be0c5bcaf5737da635bd34232f41e4d184545111cb3b05136d\" pid:3716 exited_at:{seconds:1765567952 nanos:235310893}" Dec 12 19:32:32.282602 kubelet[2938]: I1212 19:32:32.282498 2938 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 19:32:32.335294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-924733f594a202be0c5bcaf5737da635bd34232f41e4d184545111cb3b05136d-rootfs.mount: Deactivated successfully. Dec 12 19:32:32.397714 systemd[1]: Created slice kubepods-besteffort-poddf03981a_5abc_4477_8152_b7321a5c7ce6.slice - libcontainer container kubepods-besteffort-poddf03981a_5abc_4477_8152_b7321a5c7ce6.slice. Dec 12 19:32:32.412532 systemd[1]: Created slice kubepods-besteffort-pod5344f808_2f57_4103_9d43_e41974952208.slice - libcontainer container kubepods-besteffort-pod5344f808_2f57_4103_9d43_e41974952208.slice. Dec 12 19:32:32.419509 kubelet[2938]: I1212 19:32:32.419423 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5344f808-2f57-4103-9d43-e41974952208-calico-apiserver-certs\") pod \"calico-apiserver-58bf9fd49d-jx9n2\" (UID: \"5344f808-2f57-4103-9d43-e41974952208\") " pod="calico-apiserver/calico-apiserver-58bf9fd49d-jx9n2" Dec 12 19:32:32.419852 kubelet[2938]: I1212 19:32:32.419817 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae74493f-92fd-45a3-a4ca-78630ba178f3-config\") pod \"goldmane-666569f655-26fhz\" (UID: \"ae74493f-92fd-45a3-a4ca-78630ba178f3\") " pod="calico-system/goldmane-666569f655-26fhz" Dec 12 19:32:32.419982 kubelet[2938]: I1212 19:32:32.419908 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae74493f-92fd-45a3-a4ca-78630ba178f3-goldmane-ca-bundle\") pod \"goldmane-666569f655-26fhz\" (UID: \"ae74493f-92fd-45a3-a4ca-78630ba178f3\") " pod="calico-system/goldmane-666569f655-26fhz" Dec 12 19:32:32.419982 kubelet[2938]: I1212 19:32:32.419933 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/df03981a-5abc-4477-8152-b7321a5c7ce6-whisker-backend-key-pair\") pod \"whisker-c59d57568-pf57s\" (UID: \"df03981a-5abc-4477-8152-b7321a5c7ce6\") " pod="calico-system/whisker-c59d57568-pf57s" Dec 12 19:32:32.420074 kubelet[2938]: I1212 19:32:32.420064 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92bfb34d-8a18-4385-8d42-b49803a8d3e6-config-volume\") pod \"coredns-674b8bbfcf-2v62p\" (UID: \"92bfb34d-8a18-4385-8d42-b49803a8d3e6\") " pod="kube-system/coredns-674b8bbfcf-2v62p" Dec 12 19:32:32.420155 kubelet[2938]: I1212 19:32:32.420144 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7b75\" (UniqueName: \"kubernetes.io/projected/92bfb34d-8a18-4385-8d42-b49803a8d3e6-kube-api-access-h7b75\") pod \"coredns-674b8bbfcf-2v62p\" (UID: 
\"92bfb34d-8a18-4385-8d42-b49803a8d3e6\") " pod="kube-system/coredns-674b8bbfcf-2v62p" Dec 12 19:32:32.420243 kubelet[2938]: I1212 19:32:32.420233 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df03981a-5abc-4477-8152-b7321a5c7ce6-whisker-ca-bundle\") pod \"whisker-c59d57568-pf57s\" (UID: \"df03981a-5abc-4477-8152-b7321a5c7ce6\") " pod="calico-system/whisker-c59d57568-pf57s" Dec 12 19:32:32.420372 kubelet[2938]: I1212 19:32:32.420323 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj2pl\" (UniqueName: \"kubernetes.io/projected/5344f808-2f57-4103-9d43-e41974952208-kube-api-access-pj2pl\") pod \"calico-apiserver-58bf9fd49d-jx9n2\" (UID: \"5344f808-2f57-4103-9d43-e41974952208\") " pod="calico-apiserver/calico-apiserver-58bf9fd49d-jx9n2" Dec 12 19:32:32.420372 kubelet[2938]: I1212 19:32:32.420343 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sfrt\" (UniqueName: \"kubernetes.io/projected/bde65fa8-758f-4f39-b274-b1238cc0fdac-kube-api-access-2sfrt\") pod \"calico-apiserver-58bf9fd49d-cblcg\" (UID: \"bde65fa8-758f-4f39-b274-b1238cc0fdac\") " pod="calico-apiserver/calico-apiserver-58bf9fd49d-cblcg" Dec 12 19:32:32.420542 kubelet[2938]: I1212 19:32:32.420364 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bde65fa8-758f-4f39-b274-b1238cc0fdac-calico-apiserver-certs\") pod \"calico-apiserver-58bf9fd49d-cblcg\" (UID: \"bde65fa8-758f-4f39-b274-b1238cc0fdac\") " pod="calico-apiserver/calico-apiserver-58bf9fd49d-cblcg" Dec 12 19:32:32.420542 kubelet[2938]: I1212 19:32:32.420494 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ae74493f-92fd-45a3-a4ca-78630ba178f3-goldmane-key-pair\") pod \"goldmane-666569f655-26fhz\" (UID: \"ae74493f-92fd-45a3-a4ca-78630ba178f3\") " pod="calico-system/goldmane-666569f655-26fhz" Dec 12 19:32:32.420542 kubelet[2938]: I1212 19:32:32.420512 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzgz2\" (UniqueName: \"kubernetes.io/projected/ae74493f-92fd-45a3-a4ca-78630ba178f3-kube-api-access-gzgz2\") pod \"goldmane-666569f655-26fhz\" (UID: \"ae74493f-92fd-45a3-a4ca-78630ba178f3\") " pod="calico-system/goldmane-666569f655-26fhz" Dec 12 19:32:32.420753 kubelet[2938]: I1212 19:32:32.420682 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkr88\" (UniqueName: \"kubernetes.io/projected/df03981a-5abc-4477-8152-b7321a5c7ce6-kube-api-access-hkr88\") pod \"whisker-c59d57568-pf57s\" (UID: \"df03981a-5abc-4477-8152-b7321a5c7ce6\") " pod="calico-system/whisker-c59d57568-pf57s" Dec 12 19:32:32.423957 systemd[1]: Created slice kubepods-besteffort-podbde65fa8_758f_4f39_b274_b1238cc0fdac.slice - libcontainer container kubepods-besteffort-podbde65fa8_758f_4f39_b274_b1238cc0fdac.slice. Dec 12 19:32:32.434667 systemd[1]: Created slice kubepods-burstable-pod92bfb34d_8a18_4385_8d42_b49803a8d3e6.slice - libcontainer container kubepods-burstable-pod92bfb34d_8a18_4385_8d42_b49803a8d3e6.slice. 
Dec 12 19:32:32.448978 systemd[1]: Created slice kubepods-besteffort-podae74493f_92fd_45a3_a4ca_78630ba178f3.slice - libcontainer container kubepods-besteffort-podae74493f_92fd_45a3_a4ca_78630ba178f3.slice. Dec 12 19:32:32.463296 systemd[1]: Created slice kubepods-burstable-pod316e3085_f2ba_4438_937c_b4f18c2a87e3.slice - libcontainer container kubepods-burstable-pod316e3085_f2ba_4438_937c_b4f18c2a87e3.slice. Dec 12 19:32:32.471285 systemd[1]: Created slice kubepods-besteffort-podefa1b34a_a9a0_4c65_83a8_6ddcfdeb4bac.slice - libcontainer container kubepods-besteffort-podefa1b34a_a9a0_4c65_83a8_6ddcfdeb4bac.slice. Dec 12 19:32:32.521647 kubelet[2938]: I1212 19:32:32.521557 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/316e3085-f2ba-4438-937c-b4f18c2a87e3-config-volume\") pod \"coredns-674b8bbfcf-dm7rl\" (UID: \"316e3085-f2ba-4438-937c-b4f18c2a87e3\") " pod="kube-system/coredns-674b8bbfcf-dm7rl" Dec 12 19:32:32.521647 kubelet[2938]: I1212 19:32:32.521646 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac-tigera-ca-bundle\") pod \"calico-kube-controllers-5dd47b96bf-8h6g2\" (UID: \"efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac\") " pod="calico-system/calico-kube-controllers-5dd47b96bf-8h6g2" Dec 12 19:32:32.522254 kubelet[2938]: I1212 19:32:32.521668 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsqn2\" (UniqueName: \"kubernetes.io/projected/efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac-kube-api-access-gsqn2\") pod \"calico-kube-controllers-5dd47b96bf-8h6g2\" (UID: \"efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac\") " pod="calico-system/calico-kube-controllers-5dd47b96bf-8h6g2" Dec 12 19:32:32.522254 kubelet[2938]: I1212 19:32:32.521715 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwfdf\" (UniqueName: \"kubernetes.io/projected/316e3085-f2ba-4438-937c-b4f18c2a87e3-kube-api-access-bwfdf\") pod \"coredns-674b8bbfcf-dm7rl\" (UID: \"316e3085-f2ba-4438-937c-b4f18c2a87e3\") " pod="kube-system/coredns-674b8bbfcf-dm7rl" Dec 12 19:32:32.571649 containerd[1670]: time="2025-12-12T19:32:32.571593243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 19:32:32.705190 containerd[1670]: time="2025-12-12T19:32:32.705131237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c59d57568-pf57s,Uid:df03981a-5abc-4477-8152-b7321a5c7ce6,Namespace:calico-system,Attempt:0,}" Dec 12 19:32:32.731076 containerd[1670]: time="2025-12-12T19:32:32.730906705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bf9fd49d-cblcg,Uid:bde65fa8-758f-4f39-b274-b1238cc0fdac,Namespace:calico-apiserver,Attempt:0,}" Dec 12 19:32:32.790262 containerd[1670]: time="2025-12-12T19:32:32.789327791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bf9fd49d-jx9n2,Uid:5344f808-2f57-4103-9d43-e41974952208,Namespace:calico-apiserver,Attempt:0,}" Dec 12 19:32:32.818464 containerd[1670]: time="2025-12-12T19:32:32.817081263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2v62p,Uid:92bfb34d-8a18-4385-8d42-b49803a8d3e6,Namespace:kube-system,Attempt:0,}" Dec 12 19:32:32.827161 containerd[1670]: time="2025-12-12T19:32:32.789856695Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-kube-controllers-5dd47b96bf-8h6g2,Uid:efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac,Namespace:calico-system,Attempt:0,}" Dec 12 19:32:32.829339 containerd[1670]: time="2025-12-12T19:32:32.809310515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-26fhz,Uid:ae74493f-92fd-45a3-a4ca-78630ba178f3,Namespace:calico-system,Attempt:0,}" Dec 12 19:32:32.830583 containerd[1670]: time="2025-12-12T19:32:32.809352759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dm7rl,Uid:316e3085-f2ba-4438-937c-b4f18c2a87e3,Namespace:kube-system,Attempt:0,}" Dec 12 19:32:33.167037 containerd[1670]: time="2025-12-12T19:32:33.166982326Z" level=error msg="Failed to destroy network for sandbox \"eff2ede40cbf3d51137490d0fef41dbe504e29b8d1882b5ca67b927bc827c386\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.167708 containerd[1670]: time="2025-12-12T19:32:33.167501109Z" level=error msg="Failed to destroy network for sandbox \"078f583206ab39e40d3a1cbc49a6a2c4f08271d573aea02b432af69e784574fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.169369 containerd[1670]: time="2025-12-12T19:32:33.169293057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dm7rl,Uid:316e3085-f2ba-4438-937c-b4f18c2a87e3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eff2ede40cbf3d51137490d0fef41dbe504e29b8d1882b5ca67b927bc827c386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.190196 kubelet[2938]: E1212 19:32:33.189900 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eff2ede40cbf3d51137490d0fef41dbe504e29b8d1882b5ca67b927bc827c386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.190196 kubelet[2938]: E1212 19:32:33.190034 2938 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eff2ede40cbf3d51137490d0fef41dbe504e29b8d1882b5ca67b927bc827c386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dm7rl" Dec 12 19:32:33.190196 kubelet[2938]: E1212 19:32:33.190063 2938 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eff2ede40cbf3d51137490d0fef41dbe504e29b8d1882b5ca67b927bc827c386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dm7rl" Dec 12 19:32:33.190427 kubelet[2938]: E1212 19:32:33.190138 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-674b8bbfcf-dm7rl_kube-system(316e3085-f2ba-4438-937c-b4f18c2a87e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dm7rl_kube-system(316e3085-f2ba-4438-937c-b4f18c2a87e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eff2ede40cbf3d51137490d0fef41dbe504e29b8d1882b5ca67b927bc827c386\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dm7rl" podUID="316e3085-f2ba-4438-937c-b4f18c2a87e3" Dec 12 19:32:33.195322 containerd[1670]: time="2025-12-12T19:32:33.195103216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dd47b96bf-8h6g2,Uid:efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"078f583206ab39e40d3a1cbc49a6a2c4f08271d573aea02b432af69e784574fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.195823 kubelet[2938]: E1212 19:32:33.195662 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"078f583206ab39e40d3a1cbc49a6a2c4f08271d573aea02b432af69e784574fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.195823 kubelet[2938]: E1212 19:32:33.195720 2938 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"078f583206ab39e40d3a1cbc49a6a2c4f08271d573aea02b432af69e784574fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dd47b96bf-8h6g2" Dec 12 19:32:33.195823 kubelet[2938]: E1212 19:32:33.195748 2938 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"078f583206ab39e40d3a1cbc49a6a2c4f08271d573aea02b432af69e784574fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dd47b96bf-8h6g2" Dec 12 19:32:33.197583 kubelet[2938]: E1212 19:32:33.195806 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dd47b96bf-8h6g2_calico-system(efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5dd47b96bf-8h6g2_calico-system(efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"078f583206ab39e40d3a1cbc49a6a2c4f08271d573aea02b432af69e784574fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dd47b96bf-8h6g2" 
podUID="efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac" Dec 12 19:32:33.200316 containerd[1670]: time="2025-12-12T19:32:33.200271768Z" level=error msg="Failed to destroy network for sandbox \"c19fad98791b274104acb23e273ed1a982079277b8d4d5780456ca2ff9294071\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.201646 containerd[1670]: time="2025-12-12T19:32:33.201310803Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2v62p,Uid:92bfb34d-8a18-4385-8d42-b49803a8d3e6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19fad98791b274104acb23e273ed1a982079277b8d4d5780456ca2ff9294071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.201940 kubelet[2938]: E1212 19:32:33.201816 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19fad98791b274104acb23e273ed1a982079277b8d4d5780456ca2ff9294071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.201940 kubelet[2938]: E1212 19:32:33.201870 2938 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19fad98791b274104acb23e273ed1a982079277b8d4d5780456ca2ff9294071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2v62p" Dec 12 19:32:33.201940 kubelet[2938]: E1212 19:32:33.201894 2938 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19fad98791b274104acb23e273ed1a982079277b8d4d5780456ca2ff9294071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2v62p" Dec 12 19:32:33.202557 kubelet[2938]: E1212 19:32:33.201960 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2v62p_kube-system(92bfb34d-8a18-4385-8d42-b49803a8d3e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2v62p_kube-system(92bfb34d-8a18-4385-8d42-b49803a8d3e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c19fad98791b274104acb23e273ed1a982079277b8d4d5780456ca2ff9294071\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2v62p" podUID="92bfb34d-8a18-4385-8d42-b49803a8d3e6" Dec 12 19:32:33.202819 containerd[1670]: time="2025-12-12T19:32:33.202779487Z" level=error msg="Failed to destroy network for sandbox \"3e71fba0adbd8f72593eeeb078cef16ebd3a0055f6dc6e5ababf5d02002cd19e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.204066 containerd[1670]: time="2025-12-12T19:32:33.204028627Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bf9fd49d-cblcg,Uid:bde65fa8-758f-4f39-b274-b1238cc0fdac,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e71fba0adbd8f72593eeeb078cef16ebd3a0055f6dc6e5ababf5d02002cd19e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.204347 kubelet[2938]: E1212 19:32:33.204225 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e71fba0adbd8f72593eeeb078cef16ebd3a0055f6dc6e5ababf5d02002cd19e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.204347 kubelet[2938]: E1212 19:32:33.204337 2938 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e71fba0adbd8f72593eeeb078cef16ebd3a0055f6dc6e5ababf5d02002cd19e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58bf9fd49d-cblcg" Dec 12 19:32:33.204881 kubelet[2938]: E1212 19:32:33.204593 2938 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e71fba0adbd8f72593eeeb078cef16ebd3a0055f6dc6e5ababf5d02002cd19e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58bf9fd49d-cblcg" Dec 12 19:32:33.204881 kubelet[2938]: E1212 19:32:33.204673 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58bf9fd49d-cblcg_calico-apiserver(bde65fa8-758f-4f39-b274-b1238cc0fdac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58bf9fd49d-cblcg_calico-apiserver(bde65fa8-758f-4f39-b274-b1238cc0fdac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e71fba0adbd8f72593eeeb078cef16ebd3a0055f6dc6e5ababf5d02002cd19e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-cblcg" podUID="bde65fa8-758f-4f39-b274-b1238cc0fdac" Dec 12 19:32:33.223420 systemd[1]: Created slice kubepods-besteffort-pod3228a46d_97a1_46c2_a390_a03b4bb70892.slice - libcontainer container kubepods-besteffort-pod3228a46d_97a1_46c2_a390_a03b4bb70892.slice. 
Dec 12 19:32:33.228460 containerd[1670]: time="2025-12-12T19:32:33.228400180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-klld4,Uid:3228a46d-97a1-46c2-a390-a03b4bb70892,Namespace:calico-system,Attempt:0,}" Dec 12 19:32:33.237924 containerd[1670]: time="2025-12-12T19:32:33.237686957Z" level=error msg="Failed to destroy network for sandbox \"644b7ddd8abf537e0278d296281186fb0bca52e1eed012ebc7190a5bbfce07ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.238873 containerd[1670]: time="2025-12-12T19:32:33.238733698Z" level=error msg="Failed to destroy network for sandbox \"30f5bc30e7710d85a3815028232cfae76abee9981497b5b245b83bc30c720ef9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.240537 containerd[1670]: time="2025-12-12T19:32:33.240487797Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bf9fd49d-jx9n2,Uid:5344f808-2f57-4103-9d43-e41974952208,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"644b7ddd8abf537e0278d296281186fb0bca52e1eed012ebc7190a5bbfce07ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.241685 kubelet[2938]: E1212 19:32:33.241625 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"644b7ddd8abf537e0278d296281186fb0bca52e1eed012ebc7190a5bbfce07ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.243492 kubelet[2938]: E1212 19:32:33.241724 2938 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"644b7ddd8abf537e0278d296281186fb0bca52e1eed012ebc7190a5bbfce07ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58bf9fd49d-jx9n2" Dec 12 19:32:33.243492 kubelet[2938]: E1212 19:32:33.241766 2938 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"644b7ddd8abf537e0278d296281186fb0bca52e1eed012ebc7190a5bbfce07ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58bf9fd49d-jx9n2" Dec 12 19:32:33.243492 kubelet[2938]: E1212 19:32:33.241864 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58bf9fd49d-jx9n2_calico-apiserver(5344f808-2f57-4103-9d43-e41974952208)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58bf9fd49d-jx9n2_calico-apiserver(5344f808-2f57-4103-9d43-e41974952208)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"644b7ddd8abf537e0278d296281186fb0bca52e1eed012ebc7190a5bbfce07ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-jx9n2" podUID="5344f808-2f57-4103-9d43-e41974952208" Dec 12 19:32:33.249123 containerd[1670]: time="2025-12-12T19:32:33.249014714Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c59d57568-pf57s,Uid:df03981a-5abc-4477-8152-b7321a5c7ce6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f5bc30e7710d85a3815028232cfae76abee9981497b5b245b83bc30c720ef9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.250952 kubelet[2938]: E1212 19:32:33.250662 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f5bc30e7710d85a3815028232cfae76abee9981497b5b245b83bc30c720ef9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.250952 kubelet[2938]: E1212 19:32:33.250753 2938 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f5bc30e7710d85a3815028232cfae76abee9981497b5b245b83bc30c720ef9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c59d57568-pf57s" Dec 12 19:32:33.250952 kubelet[2938]: E1212 19:32:33.250780 2938 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30f5bc30e7710d85a3815028232cfae76abee9981497b5b245b83bc30c720ef9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c59d57568-pf57s" Dec 12 19:32:33.251154 kubelet[2938]: E1212 19:32:33.250870 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c59d57568-pf57s_calico-system(df03981a-5abc-4477-8152-b7321a5c7ce6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c59d57568-pf57s_calico-system(df03981a-5abc-4477-8152-b7321a5c7ce6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30f5bc30e7710d85a3815028232cfae76abee9981497b5b245b83bc30c720ef9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c59d57568-pf57s" podUID="df03981a-5abc-4477-8152-b7321a5c7ce6" Dec 12 19:32:33.260637 containerd[1670]: time="2025-12-12T19:32:33.260450710Z" level=error msg="Failed to destroy network for sandbox \"8e273741cdbdc0c335fa9240c29e3a753232fb07f7d7c3a86c90142145ad107a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Dec 12 19:32:33.262089 containerd[1670]: time="2025-12-12T19:32:33.262041504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-26fhz,Uid:ae74493f-92fd-45a3-a4ca-78630ba178f3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e273741cdbdc0c335fa9240c29e3a753232fb07f7d7c3a86c90142145ad107a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.262873 kubelet[2938]: E1212 19:32:33.262581 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e273741cdbdc0c335fa9240c29e3a753232fb07f7d7c3a86c90142145ad107a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.262873 kubelet[2938]: E1212 19:32:33.262674 2938 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e273741cdbdc0c335fa9240c29e3a753232fb07f7d7c3a86c90142145ad107a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-26fhz" Dec 12 19:32:33.262873 kubelet[2938]: E1212 19:32:33.262714 2938 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e273741cdbdc0c335fa9240c29e3a753232fb07f7d7c3a86c90142145ad107a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-26fhz" Dec 12 19:32:33.263399 kubelet[2938]: E1212 19:32:33.262798 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-26fhz_calico-system(ae74493f-92fd-45a3-a4ca-78630ba178f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-26fhz_calico-system(ae74493f-92fd-45a3-a4ca-78630ba178f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e273741cdbdc0c335fa9240c29e3a753232fb07f7d7c3a86c90142145ad107a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-26fhz" podUID="ae74493f-92fd-45a3-a4ca-78630ba178f3" Dec 12 19:32:33.333591 containerd[1670]: time="2025-12-12T19:32:33.333515707Z" level=error msg="Failed to destroy network for sandbox \"2ed587b007fecaa099b93c3074b6b4665c4d951fc6611f2427278a980c2ff4a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.336148 containerd[1670]: time="2025-12-12T19:32:33.336061520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-klld4,Uid:3228a46d-97a1-46c2-a390-a03b4bb70892,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2ed587b007fecaa099b93c3074b6b4665c4d951fc6611f2427278a980c2ff4a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.336812 kubelet[2938]: E1212 19:32:33.336768 2938 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ed587b007fecaa099b93c3074b6b4665c4d951fc6611f2427278a980c2ff4a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 19:32:33.337319 kubelet[2938]: E1212 19:32:33.336841 2938 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ed587b007fecaa099b93c3074b6b4665c4d951fc6611f2427278a980c2ff4a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-klld4" Dec 12 19:32:33.337319 kubelet[2938]: E1212 19:32:33.336875 2938 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ed587b007fecaa099b93c3074b6b4665c4d951fc6611f2427278a980c2ff4a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-klld4" Dec 12 19:32:33.337319 kubelet[2938]: E1212 19:32:33.336937 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-klld4_calico-system(3228a46d-97a1-46c2-a390-a03b4bb70892)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-klld4_calico-system(3228a46d-97a1-46c2-a390-a03b4bb70892)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ed587b007fecaa099b93c3074b6b4665c4d951fc6611f2427278a980c2ff4a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:32:33.369268 kubelet[2938]: I1212 19:32:33.368761 2938 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 19:32:33.429000 audit[3969]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3969 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:33.429000 audit[3969]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd3abf1d10 a2=0 a3=7ffd3abf1cfc items=0 ppid=3062 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:33.429000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:33.433000 audit[3969]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3969 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:33.433000 audit[3969]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=6276 a0=3 a1=7ffd3abf1d10 a2=0 a3=7ffd3abf1cfc items=0 ppid=3062 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:33.433000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:40.455737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3685150402.mount: Deactivated successfully. Dec 12 19:32:40.519854 containerd[1670]: time="2025-12-12T19:32:40.509198215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:32:40.529570 containerd[1670]: time="2025-12-12T19:32:40.528472366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 12 19:32:40.561859 containerd[1670]: time="2025-12-12T19:32:40.561189579Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:32:40.569693 containerd[1670]: time="2025-12-12T19:32:40.569644297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 19:32:40.585173 containerd[1670]: time="2025-12-12T19:32:40.585094808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.013433178s" Dec 12 19:32:40.585383 containerd[1670]: time="2025-12-12T19:32:40.585366814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 19:32:40.634726 containerd[1670]: time="2025-12-12T19:32:40.634683154Z" level=info msg="CreateContainer within sandbox \"b3ff3e297cdd057ebf1826e250d8cdef3e324cd7bd379059b88600fa3cef38e5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 19:32:40.681675 containerd[1670]: time="2025-12-12T19:32:40.681605535Z" level=info msg="Container 8ae7fffb267e06a8084c18de1dd3152986aa57d1e023092ae0e856e165286de7: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:32:40.683005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990962184.mount: Deactivated successfully. 
Dec 12 19:32:40.755865 containerd[1670]: time="2025-12-12T19:32:40.755707376Z" level=info msg="CreateContainer within sandbox \"b3ff3e297cdd057ebf1826e250d8cdef3e324cd7bd379059b88600fa3cef38e5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8ae7fffb267e06a8084c18de1dd3152986aa57d1e023092ae0e856e165286de7\"" Dec 12 19:32:40.757258 containerd[1670]: time="2025-12-12T19:32:40.756925949Z" level=info msg="StartContainer for \"8ae7fffb267e06a8084c18de1dd3152986aa57d1e023092ae0e856e165286de7\"" Dec 12 19:32:40.770451 containerd[1670]: time="2025-12-12T19:32:40.770372532Z" level=info msg="connecting to shim 8ae7fffb267e06a8084c18de1dd3152986aa57d1e023092ae0e856e165286de7" address="unix:///run/containerd/s/4429146e898965112139654d8a100012215e0b5f4f64686bb314eee27522fca8" protocol=ttrpc version=3 Dec 12 19:32:40.909766 systemd[1]: Started cri-containerd-8ae7fffb267e06a8084c18de1dd3152986aa57d1e023092ae0e856e165286de7.scope - libcontainer container 8ae7fffb267e06a8084c18de1dd3152986aa57d1e023092ae0e856e165286de7. Dec 12 19:32:40.981000 audit: BPF prog-id=181 op=LOAD Dec 12 19:32:40.986886 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 12 19:32:40.991099 kernel: audit: type=1334 audit(1765567960.981:588): prog-id=181 op=LOAD Dec 12 19:32:40.991193 kernel: audit: type=1300 audit(1765567960.981:588): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000fe488 a2=98 a3=0 items=0 ppid=3517 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:40.981000 audit[3976]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000fe488 a2=98 a3=0 items=0 ppid=3517 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:40.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653766666662323637653036613830383463313864653164643331 Dec 12 19:32:40.998467 kernel: audit: type=1327 audit(1765567960.981:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653766666662323637653036613830383463313864653164643331 Dec 12 19:32:40.987000 audit: BPF prog-id=182 op=LOAD Dec 12 19:32:40.987000 audit[3976]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0000fe218 a2=98 a3=0 items=0 ppid=3517 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:41.001767 kernel: audit: type=1334 audit(1765567960.987:589): prog-id=182 op=LOAD Dec 12 19:32:41.001835 kernel: audit: type=1300 audit(1765567960.987:589): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0000fe218 a2=98 a3=0 items=0 ppid=3517 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:40.987000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653766666662323637653036613830383463313864653164643331 Dec 12 19:32:41.010213 kernel: audit: type=1327 audit(1765567960.987:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653766666662323637653036613830383463313864653164643331 Dec 12 19:32:41.010307 kernel: audit: type=1334 audit(1765567960.987:590): prog-id=182 op=UNLOAD Dec 12 19:32:41.010705 kernel: audit: type=1300 audit(1765567960.987:590): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3517 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:40.987000 audit: BPF prog-id=182 op=UNLOAD Dec 12 19:32:40.987000 audit[3976]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3517 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:40.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653766666662323637653036613830383463313864653164643331 Dec 12 19:32:41.014563 kernel: audit: type=1327 audit(1765567960.987:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653766666662323637653036613830383463313864653164643331 Dec 12 19:32:41.016569 kernel: audit: type=1334 audit(1765567960.987:591): prog-id=181 op=UNLOAD Dec 12 19:32:40.987000 audit: BPF prog-id=181 op=UNLOAD Dec 12 19:32:40.987000 audit[3976]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3517 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:40.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653766666662323637653036613830383463313864653164643331 Dec 12 19:32:40.988000 audit: BPF prog-id=183 op=LOAD Dec 12 19:32:40.988000 audit[3976]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000fe6e8 a2=98 a3=0 items=0 ppid=3517 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:40.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861653766666662323637653036613830383463313864653164643331 Dec 12 19:32:41.046509 
containerd[1670]: time="2025-12-12T19:32:41.046412136Z" level=info msg="StartContainer for \"8ae7fffb267e06a8084c18de1dd3152986aa57d1e023092ae0e856e165286de7\" returns successfully" Dec 12 19:32:41.214640 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 19:32:41.216300 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 19:32:41.646532 kubelet[2938]: I1212 19:32:41.645329 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-747jf" podStartSLOduration=1.800404269 podStartE2EDuration="22.643702249s" podCreationTimestamp="2025-12-12 19:32:19 +0000 UTC" firstStartedPulling="2025-12-12 19:32:19.743447223 +0000 UTC m=+24.761599120" lastFinishedPulling="2025-12-12 19:32:40.586745198 +0000 UTC m=+45.604897100" observedRunningTime="2025-12-12 19:32:41.641961832 +0000 UTC m=+46.660113750" watchObservedRunningTime="2025-12-12 19:32:41.643702249 +0000 UTC m=+46.661854523" Dec 12 19:32:41.720366 kubelet[2938]: I1212 19:32:41.719897 2938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/df03981a-5abc-4477-8152-b7321a5c7ce6-whisker-backend-key-pair\") pod \"df03981a-5abc-4477-8152-b7321a5c7ce6\" (UID: \"df03981a-5abc-4477-8152-b7321a5c7ce6\") " Dec 12 19:32:41.720873 kubelet[2938]: I1212 19:32:41.720713 2938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df03981a-5abc-4477-8152-b7321a5c7ce6-whisker-ca-bundle\") pod \"df03981a-5abc-4477-8152-b7321a5c7ce6\" (UID: \"df03981a-5abc-4477-8152-b7321a5c7ce6\") " Dec 12 19:32:41.721425 kubelet[2938]: I1212 19:32:41.721195 2938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkr88\" (UniqueName: \"kubernetes.io/projected/df03981a-5abc-4477-8152-b7321a5c7ce6-kube-api-access-hkr88\") pod \"df03981a-5abc-4477-8152-b7321a5c7ce6\" (UID: \"df03981a-5abc-4477-8152-b7321a5c7ce6\") " Dec 12 19:32:41.745032 kubelet[2938]: I1212 19:32:41.744912 2938 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df03981a-5abc-4477-8152-b7321a5c7ce6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "df03981a-5abc-4477-8152-b7321a5c7ce6" (UID: "df03981a-5abc-4477-8152-b7321a5c7ce6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 19:32:41.759790 systemd[1]: var-lib-kubelet-pods-df03981a\x2d5abc\x2d4477\x2d8152\x2db7321a5c7ce6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhkr88.mount: Deactivated successfully. Dec 12 19:32:41.763872 kubelet[2938]: I1212 19:32:41.763818 2938 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df03981a-5abc-4477-8152-b7321a5c7ce6-kube-api-access-hkr88" (OuterVolumeSpecName: "kube-api-access-hkr88") pod "df03981a-5abc-4477-8152-b7321a5c7ce6" (UID: "df03981a-5abc-4477-8152-b7321a5c7ce6"). InnerVolumeSpecName "kube-api-access-hkr88". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 19:32:41.774350 systemd[1]: var-lib-kubelet-pods-df03981a\x2d5abc\x2d4477\x2d8152\x2db7321a5c7ce6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 12 19:32:41.778989 kubelet[2938]: I1212 19:32:41.777782 2938 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df03981a-5abc-4477-8152-b7321a5c7ce6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "df03981a-5abc-4477-8152-b7321a5c7ce6" (UID: "df03981a-5abc-4477-8152-b7321a5c7ce6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 19:32:41.824379 kubelet[2938]: I1212 19:32:41.824259 2938 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hkr88\" (UniqueName: \"kubernetes.io/projected/df03981a-5abc-4477-8152-b7321a5c7ce6-kube-api-access-hkr88\") on node \"srv-i3fa2.gb1.brightbox.com\" DevicePath \"\"" Dec 12 19:32:41.824846 kubelet[2938]: I1212 19:32:41.824828 2938 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/df03981a-5abc-4477-8152-b7321a5c7ce6-whisker-backend-key-pair\") on node \"srv-i3fa2.gb1.brightbox.com\" DevicePath \"\"" Dec 12 19:32:41.825130 kubelet[2938]: I1212 19:32:41.825115 2938 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df03981a-5abc-4477-8152-b7321a5c7ce6-whisker-ca-bundle\") on node \"srv-i3fa2.gb1.brightbox.com\" DevicePath \"\"" Dec 12 19:32:41.920618 systemd[1]: Removed slice kubepods-besteffort-poddf03981a_5abc_4477_8152_b7321a5c7ce6.slice - libcontainer container kubepods-besteffort-poddf03981a_5abc_4477_8152_b7321a5c7ce6.slice. Dec 12 19:32:42.124970 systemd[1]: Created slice kubepods-besteffort-podb2777763_2a14_4bdc_bc89_41885f08913f.slice - libcontainer container kubepods-besteffort-podb2777763_2a14_4bdc_bc89_41885f08913f.slice. 
Dec 12 19:32:42.231218 kubelet[2938]: I1212 19:32:42.231067 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2777763-2a14-4bdc-bc89-41885f08913f-whisker-ca-bundle\") pod \"whisker-6595c59d9d-t84l2\" (UID: \"b2777763-2a14-4bdc-bc89-41885f08913f\") " pod="calico-system/whisker-6595c59d9d-t84l2" Dec 12 19:32:42.231218 kubelet[2938]: I1212 19:32:42.231125 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7ljs\" (UniqueName: \"kubernetes.io/projected/b2777763-2a14-4bdc-bc89-41885f08913f-kube-api-access-m7ljs\") pod \"whisker-6595c59d9d-t84l2\" (UID: \"b2777763-2a14-4bdc-bc89-41885f08913f\") " pod="calico-system/whisker-6595c59d9d-t84l2" Dec 12 19:32:42.231218 kubelet[2938]: I1212 19:32:42.231155 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b2777763-2a14-4bdc-bc89-41885f08913f-whisker-backend-key-pair\") pod \"whisker-6595c59d9d-t84l2\" (UID: \"b2777763-2a14-4bdc-bc89-41885f08913f\") " pod="calico-system/whisker-6595c59d9d-t84l2" Dec 12 19:32:42.432329 containerd[1670]: time="2025-12-12T19:32:42.432167832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6595c59d9d-t84l2,Uid:b2777763-2a14-4bdc-bc89-41885f08913f,Namespace:calico-system,Attempt:0,}" Dec 12 19:32:42.749005 systemd-networkd[1573]: calibc9ad81493a: Link UP Dec 12 19:32:42.750180 systemd-networkd[1573]: calibc9ad81493a: Gained carrier Dec 12 19:32:42.782808 containerd[1670]: 2025-12-12 19:32:42.484 [INFO][4076] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 19:32:42.782808 containerd[1670]: 2025-12-12 19:32:42.515 [INFO][4076] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i3fa2.gb1.brightbox.com-k8s-whisker--6595c59d9d--t84l2-eth0 whisker-6595c59d9d- calico-system b2777763-2a14-4bdc-bc89-41885f08913f 908 0 2025-12-12 19:32:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6595c59d9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-i3fa2.gb1.brightbox.com whisker-6595c59d9d-t84l2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibc9ad81493a [] [] }} ContainerID="cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" Namespace="calico-system" Pod="whisker-6595c59d9d-t84l2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-whisker--6595c59d9d--t84l2-" Dec 12 19:32:42.782808 containerd[1670]: 2025-12-12 19:32:42.515 [INFO][4076] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" Namespace="calico-system" Pod="whisker-6595c59d9d-t84l2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-whisker--6595c59d9d--t84l2-eth0" Dec 12 19:32:42.782808 containerd[1670]: 2025-12-12 19:32:42.625 [INFO][4085] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" HandleID="k8s-pod-network.cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" Workload="srv--i3fa2.gb1.brightbox.com-k8s-whisker--6595c59d9d--t84l2-eth0" Dec 12 19:32:42.783189 containerd[1670]: 2025-12-12 19:32:42.629 [INFO][4085] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" HandleID="k8s-pod-network.cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" Workload="srv--i3fa2.gb1.brightbox.com-k8s-whisker--6595c59d9d--t84l2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d95d0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-i3fa2.gb1.brightbox.com", "pod":"whisker-6595c59d9d-t84l2", "timestamp":"2025-12-12 19:32:42.625663316 +0000 UTC"}, Hostname:"srv-i3fa2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 19:32:42.783189 containerd[1670]: 2025-12-12 19:32:42.629 [INFO][4085] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 19:32:42.783189 containerd[1670]: 2025-12-12 19:32:42.629 [INFO][4085] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 19:32:42.783189 containerd[1670]: 2025-12-12 19:32:42.630 [INFO][4085] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i3fa2.gb1.brightbox.com' Dec 12 19:32:42.783189 containerd[1670]: 2025-12-12 19:32:42.652 [INFO][4085] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:42.783189 containerd[1670]: 2025-12-12 19:32:42.664 [INFO][4085] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:42.783189 containerd[1670]: 2025-12-12 19:32:42.672 [INFO][4085] ipam/ipam.go 511: Trying affinity for 192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:42.783189 containerd[1670]: 2025-12-12 19:32:42.678 [INFO][4085] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:42.783189 containerd[1670]: 2025-12-12 19:32:42.681 [INFO][4085] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:42.784291 containerd[1670]: 2025-12-12 19:32:42.681 [INFO][4085] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.64/26 handle="k8s-pod-network.cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:42.784291 containerd[1670]: 2025-12-12 19:32:42.685 [INFO][4085] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b Dec 12 19:32:42.784291 containerd[1670]: 2025-12-12 19:32:42.692 [INFO][4085] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.64/26 handle="k8s-pod-network.cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:42.784291 containerd[1670]: 2025-12-12 19:32:42.715 [INFO][4085] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.65/26] block=192.168.98.64/26 handle="k8s-pod-network.cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:42.784291 containerd[1670]: 2025-12-12 19:32:42.715 [INFO][4085] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.65/26] handle="k8s-pod-network.cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" host="srv-i3fa2.gb1.brightbox.com" Dec 12 
19:32:42.784291 containerd[1670]: 2025-12-12 19:32:42.715 [INFO][4085] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 19:32:42.784291 containerd[1670]: 2025-12-12 19:32:42.715 [INFO][4085] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.65/26] IPv6=[] ContainerID="cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" HandleID="k8s-pod-network.cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" Workload="srv--i3fa2.gb1.brightbox.com-k8s-whisker--6595c59d9d--t84l2-eth0" Dec 12 19:32:42.784728 containerd[1670]: 2025-12-12 19:32:42.719 [INFO][4076] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" Namespace="calico-system" Pod="whisker-6595c59d9d-t84l2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-whisker--6595c59d9d--t84l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i3fa2.gb1.brightbox.com-k8s-whisker--6595c59d9d--t84l2-eth0", GenerateName:"whisker-6595c59d9d-", Namespace:"calico-system", SelfLink:"", UID:"b2777763-2a14-4bdc-bc89-41885f08913f", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 32, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6595c59d9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i3fa2.gb1.brightbox.com", ContainerID:"", Pod:"whisker-6595c59d9d-t84l2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.98.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibc9ad81493a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:32:42.784728 containerd[1670]: 2025-12-12 19:32:42.719 [INFO][4076] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.65/32] ContainerID="cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" Namespace="calico-system" Pod="whisker-6595c59d9d-t84l2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-whisker--6595c59d9d--t84l2-eth0" Dec 12 19:32:42.784887 containerd[1670]: 2025-12-12 19:32:42.720 [INFO][4076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc9ad81493a ContainerID="cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" Namespace="calico-system" Pod="whisker-6595c59d9d-t84l2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-whisker--6595c59d9d--t84l2-eth0" Dec 12 19:32:42.784887 containerd[1670]: 2025-12-12 19:32:42.735 [INFO][4076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" Namespace="calico-system" Pod="whisker-6595c59d9d-t84l2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-whisker--6595c59d9d--t84l2-eth0" Dec 12 19:32:42.784945 containerd[1670]: 2025-12-12 19:32:42.736 [INFO][4076] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" Namespace="calico-system" Pod="whisker-6595c59d9d-t84l2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-whisker--6595c59d9d--t84l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i3fa2.gb1.brightbox.com-k8s-whisker--6595c59d9d--t84l2-eth0", GenerateName:"whisker-6595c59d9d-", Namespace:"calico-system", SelfLink:"", UID:"b2777763-2a14-4bdc-bc89-41885f08913f", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 32, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6595c59d9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i3fa2.gb1.brightbox.com", ContainerID:"cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b", Pod:"whisker-6595c59d9d-t84l2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.98.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibc9ad81493a", MAC:"f6:46:52:3e:17:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:32:42.785027 containerd[1670]: 2025-12-12 19:32:42.768 [INFO][4076] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" Namespace="calico-system" Pod="whisker-6595c59d9d-t84l2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-whisker--6595c59d9d--t84l2-eth0" Dec 12 19:32:42.939420 containerd[1670]: time="2025-12-12T19:32:42.939266570Z" level=info msg="connecting to shim cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b" address="unix:///run/containerd/s/9f1f93a8978b0b7ab7c707dc5fc3b2a8b2cb1a99b496f1e8898dbde667bf5e82" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:32:42.972709 systemd[1]: Started cri-containerd-cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b.scope - libcontainer container cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b. 
Dec 12 19:32:42.987000 audit: BPF prog-id=184 op=LOAD Dec 12 19:32:42.988000 audit: BPF prog-id=185 op=LOAD Dec 12 19:32:42.988000 audit[4139]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4128 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:42.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363653931323734316636653163656137306665343839386537626330 Dec 12 19:32:42.988000 audit: BPF prog-id=185 op=UNLOAD Dec 12 19:32:42.988000 audit[4139]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4128 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:42.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363653931323734316636653163656137306665343839386537626330 Dec 12 19:32:42.989000 audit: BPF prog-id=186 op=LOAD Dec 12 19:32:42.989000 audit[4139]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4128 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:42.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363653931323734316636653163656137306665343839386537626330 Dec 12 19:32:42.989000 audit: BPF prog-id=187 op=LOAD Dec 12 19:32:42.989000 audit[4139]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4128 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:42.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363653931323734316636653163656137306665343839386537626330 Dec 12 19:32:42.989000 audit: BPF prog-id=187 op=UNLOAD Dec 12 19:32:42.989000 audit[4139]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4128 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:42.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363653931323734316636653163656137306665343839386537626330 Dec 12 19:32:42.989000 audit: BPF prog-id=186 op=UNLOAD Dec 12 19:32:42.989000 audit[4139]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4128 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:42.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363653931323734316636653163656137306665343839386537626330 Dec 12 19:32:42.989000 audit: BPF prog-id=188 op=LOAD Dec 12 19:32:42.989000 audit[4139]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4128 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:42.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363653931323734316636653163656137306665343839386537626330 Dec 12 19:32:43.035733 containerd[1670]: time="2025-12-12T19:32:43.035596953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6595c59d9d-t84l2,Uid:b2777763-2a14-4bdc-bc89-41885f08913f,Namespace:calico-system,Attempt:0,} returns sandbox id \"cce912741f6e1cea70fe4898e7bc0f5ddd90a9a93d9e4d4dd200938c0d9e280b\"" Dec 12 19:32:43.053711 containerd[1670]: time="2025-12-12T19:32:43.053670100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 19:32:43.218718 kubelet[2938]: I1212 19:32:43.217850 2938 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df03981a-5abc-4477-8152-b7321a5c7ce6" path="/var/lib/kubelet/pods/df03981a-5abc-4477-8152-b7321a5c7ce6/volumes" Dec 12 19:32:43.383843 containerd[1670]: time="2025-12-12T19:32:43.383781107Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:32:43.384522 containerd[1670]: time="2025-12-12T19:32:43.384464538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 19:32:43.384847 containerd[1670]: time="2025-12-12T19:32:43.384570740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 19:32:43.392554 kubelet[2938]: E1212 19:32:43.386427 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 19:32:43.392554 kubelet[2938]: E1212 19:32:43.392353 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 19:32:43.410366 kubelet[2938]: E1212 19:32:43.410267 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:883a52629ac84d8fb872d294f44af2b8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7ljs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6595c59d9d-t84l2_calico-system(b2777763-2a14-4bdc-bc89-41885f08913f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 19:32:43.413712 containerd[1670]: time="2025-12-12T19:32:43.413668685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 19:32:43.725000 audit: BPF prog-id=189 op=LOAD Dec 12 19:32:43.725000 audit[4288]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe0e460b30 a2=98 a3=1fffffffffffffff items=0 ppid=4178 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:43.725000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 19:32:43.726000 audit: BPF prog-id=189 op=UNLOAD Dec 12 19:32:43.726000 audit[4288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe0e460b00 a3=0 items=0 ppid=4178 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:43.726000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 19:32:43.727000 audit: BPF prog-id=190 op=LOAD Dec 12 19:32:43.727000 audit[4288]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe0e460a10 a2=94 a3=3 items=0 ppid=4178 pid=4288 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:43.727000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 19:32:43.729000 audit: BPF prog-id=190 op=UNLOAD Dec 12 19:32:43.729000 audit[4288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe0e460a10 a2=94 a3=3 items=0 ppid=4178 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:43.729000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 19:32:43.729000 audit: BPF prog-id=191 op=LOAD Dec 12 19:32:43.729000 audit[4288]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe0e460a50 a2=94 a3=7ffe0e460c30 items=0 ppid=4178 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:43.729000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 19:32:43.729000 audit: BPF prog-id=191 op=UNLOAD Dec 12 19:32:43.729000 audit[4288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe0e460a50 a2=94 a3=7ffe0e460c30 items=0 ppid=4178 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:43.729000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 19:32:43.730862 containerd[1670]: time="2025-12-12T19:32:43.730826019Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:32:43.731321 containerd[1670]: time="2025-12-12T19:32:43.731273386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 19:32:43.731370 containerd[1670]: time="2025-12-12T19:32:43.731356671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 19:32:43.731659 kubelet[2938]: E1212 19:32:43.731620 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 19:32:43.731822 kubelet[2938]: E1212 19:32:43.731681 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 19:32:43.733092 kubelet[2938]: E1212 19:32:43.733019 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7ljs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6595c59d9d-t84l2_calico-system(b2777763-2a14-4bdc-bc89-41885f08913f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 19:32:43.735000 audit: BPF prog-id=192 op=LOAD Dec 12 19:32:43.735000 audit[4292]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffed5c7a080 a2=98 a3=3 items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:43.735000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:43.737000 audit: BPF prog-id=192 op=UNLOAD Dec 12 19:32:43.737000 audit[4292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffed5c7a050 a3=0 items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:43.737000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:43.738000 audit: BPF prog-id=193 op=LOAD Dec 12 19:32:43.738000 audit[4292]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffed5c79e70 a2=94 a3=54428f items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:43.738000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:43.740155 kubelet[2938]: E1212 19:32:43.739950 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6595c59d9d-t84l2" podUID="b2777763-2a14-4bdc-bc89-41885f08913f" Dec 12 19:32:43.739000 audit: BPF prog-id=193 op=UNLOAD Dec 12 19:32:43.739000 audit[4292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffed5c79e70 a2=94 a3=54428f items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:43.739000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:43.740000 audit: BPF prog-id=194 op=LOAD Dec 12 19:32:43.740000 audit[4292]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffed5c79ea0 a2=94 a3=2 items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:43.740000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:43.740000 audit: BPF prog-id=194 op=UNLOAD Dec 12 19:32:43.740000 audit[4292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffed5c79ea0 a2=0 a3=2 items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:43.740000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:44.029000 audit: BPF prog-id=195 op=LOAD Dec 12 19:32:44.029000 audit[4292]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffed5c79d60 a2=94 a3=1 items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.029000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:44.030000 audit: BPF prog-id=195 op=UNLOAD Dec 12 19:32:44.030000 audit[4292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffed5c79d60 a2=94 a3=1 items=0 ppid=4178 pid=4292 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.030000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:44.044000 audit: BPF prog-id=196 op=LOAD Dec 12 19:32:44.044000 audit[4292]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffed5c79d50 a2=94 a3=4 items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.044000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:44.045000 audit: BPF prog-id=196 op=UNLOAD Dec 12 19:32:44.045000 audit[4292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffed5c79d50 a2=0 a3=4 items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.045000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:44.045000 audit: BPF prog-id=197 op=LOAD Dec 12 19:32:44.045000 audit[4292]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffed5c79bb0 a2=94 a3=5 items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.045000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:44.045000 audit: BPF prog-id=197 op=UNLOAD Dec 12 19:32:44.045000 audit[4292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffed5c79bb0 a2=0 a3=5 items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.045000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:44.046000 audit: BPF prog-id=198 op=LOAD Dec 12 19:32:44.046000 audit[4292]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffed5c79dd0 a2=94 a3=6 items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.046000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:44.046000 audit: BPF prog-id=198 op=UNLOAD Dec 12 19:32:44.046000 audit[4292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffed5c79dd0 a2=0 a3=6 items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.046000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:44.046000 audit: BPF prog-id=199 op=LOAD Dec 12 19:32:44.046000 audit[4292]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffed5c79580 a2=94 a3=88 items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.046000 
audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:44.047000 audit: BPF prog-id=200 op=LOAD Dec 12 19:32:44.047000 audit[4292]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffed5c79400 a2=94 a3=2 items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.047000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:44.047000 audit: BPF prog-id=200 op=UNLOAD Dec 12 19:32:44.047000 audit[4292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffed5c79430 a2=0 a3=7ffed5c79530 items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.047000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:44.048000 audit: BPF prog-id=199 op=UNLOAD Dec 12 19:32:44.048000 audit[4292]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=26b5dd10 a2=0 a3=d8ffa30d58626f62 items=0 ppid=4178 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.048000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 19:32:44.058000 audit: BPF prog-id=201 op=LOAD Dec 12 19:32:44.058000 audit[4312]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb11c6320 a2=98 a3=1999999999999999 items=0 ppid=4178 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.058000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 19:32:44.058000 audit: BPF prog-id=201 op=UNLOAD Dec 12 19:32:44.058000 audit[4312]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffb11c62f0 a3=0 items=0 ppid=4178 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.058000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 19:32:44.058000 audit: BPF prog-id=202 op=LOAD Dec 12 19:32:44.058000 audit[4312]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb11c6200 a2=94 a3=ffff items=0 ppid=4178 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.058000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 19:32:44.058000 audit: BPF prog-id=202 op=UNLOAD Dec 12 19:32:44.058000 audit[4312]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffb11c6200 a2=94 a3=ffff items=0 ppid=4178 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.058000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 19:32:44.058000 audit: BPF prog-id=203 op=LOAD Dec 12 19:32:44.058000 audit[4312]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb11c6240 a2=94 a3=7fffb11c6420 items=0 ppid=4178 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.058000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 19:32:44.058000 audit: BPF prog-id=203 op=UNLOAD Dec 12 19:32:44.058000 audit[4312]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffb11c6240 a2=94 a3=7fffb11c6420 items=0 ppid=4178 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.058000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 19:32:44.150332 systemd-networkd[1573]: vxlan.calico: Link UP Dec 12 19:32:44.150863 systemd-networkd[1573]: vxlan.calico: Gained carrier Dec 12 19:32:44.183000 audit: BPF prog-id=204 op=LOAD Dec 12 19:32:44.183000 audit[4337]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe79960340 a2=98 a3=0 items=0 ppid=4178 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 19:32:44.183000 audit: BPF prog-id=204 op=UNLOAD Dec 12 19:32:44.183000 audit[4337]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe79960310 a3=0 items=0 ppid=4178 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
19:32:44.183000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 19:32:44.184000 audit: BPF prog-id=205 op=LOAD Dec 12 19:32:44.184000 audit[4337]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe79960150 a2=94 a3=54428f items=0 ppid=4178 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.184000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 19:32:44.185000 audit: BPF prog-id=205 op=UNLOAD Dec 12 19:32:44.185000 audit[4337]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe79960150 a2=94 a3=54428f items=0 ppid=4178 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.185000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 19:32:44.185000 audit: BPF prog-id=206 op=LOAD Dec 12 19:32:44.185000 audit[4337]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe79960180 a2=94 a3=2 items=0 ppid=4178 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.185000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 19:32:44.185000 audit: BPF prog-id=206 op=UNLOAD Dec 12 19:32:44.185000 audit[4337]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe79960180 a2=0 a3=2 items=0 ppid=4178 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.185000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 19:32:44.185000 audit: BPF prog-id=207 op=LOAD Dec 12 19:32:44.185000 audit[4337]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe7995ff30 a2=94 a3=4 items=0 ppid=4178 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.185000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 19:32:44.185000 audit: BPF prog-id=207 op=UNLOAD Dec 12 19:32:44.185000 
audit[4337]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe7995ff30 a2=94 a3=4 items=0 ppid=4178 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.185000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 19:32:44.186000 audit: BPF prog-id=208 op=LOAD Dec 12 19:32:44.186000 audit[4337]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe79960030 a2=94 a3=7ffe799601b0 items=0 ppid=4178 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.186000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 19:32:44.186000 audit: BPF prog-id=208 op=UNLOAD Dec 12 19:32:44.186000 audit[4337]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe79960030 a2=0 a3=7ffe799601b0 items=0 ppid=4178 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.186000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 19:32:44.189000 audit: BPF prog-id=209 op=LOAD Dec 12 19:32:44.189000 audit[4337]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe7995f760 a2=94 a3=2 items=0 ppid=4178 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 19:32:44.189000 audit: BPF prog-id=209 op=UNLOAD Dec 12 19:32:44.189000 audit[4337]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe7995f760 a2=0 a3=2 items=0 ppid=4178 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 19:32:44.189000 audit: BPF prog-id=210 op=LOAD Dec 12 19:32:44.189000 audit[4337]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe7995f860 a2=94 a3=30 items=0 ppid=4178 pid=4337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
19:32:44.189000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 19:32:44.204000 audit: BPF prog-id=211 op=LOAD Dec 12 19:32:44.204000 audit[4345]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc7ee75280 a2=98 a3=0 items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.204000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.205000 audit: BPF prog-id=211 op=UNLOAD Dec 12 19:32:44.205000 audit[4345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc7ee75250 a3=0 items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.205000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.206000 audit: BPF prog-id=212 op=LOAD Dec 12 19:32:44.206000 audit[4345]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc7ee75070 a2=94 a3=54428f items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.206000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.206000 audit: BPF prog-id=212 op=UNLOAD Dec 12 19:32:44.206000 audit[4345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc7ee75070 a2=94 a3=54428f items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.206000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.206000 audit: BPF prog-id=213 op=LOAD Dec 12 19:32:44.206000 audit[4345]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc7ee750a0 a2=94 a3=2 items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.206000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.206000 audit: BPF prog-id=213 op=UNLOAD Dec 12 19:32:44.206000 audit[4345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc7ee750a0 a2=0 a3=2 items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.206000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.444000 audit: BPF prog-id=214 op=LOAD Dec 12 19:32:44.444000 audit[4345]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc7ee74f60 a2=94 a3=1 items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.444000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.445000 audit: BPF prog-id=214 op=UNLOAD Dec 12 19:32:44.445000 audit[4345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc7ee74f60 a2=94 a3=1 items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.445000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.459000 audit: BPF prog-id=215 op=LOAD Dec 12 19:32:44.459000 audit[4345]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc7ee74f50 a2=94 a3=4 items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.459000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.459000 audit: BPF prog-id=215 op=UNLOAD Dec 12 19:32:44.459000 audit[4345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc7ee74f50 a2=0 a3=4 items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.459000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.460000 audit: BPF prog-id=216 op=LOAD Dec 12 19:32:44.460000 audit[4345]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc7ee74db0 a2=94 a3=5 items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.460000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.460000 audit: BPF prog-id=216 op=UNLOAD Dec 12 19:32:44.460000 audit[4345]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc7ee74db0 a2=0 a3=5 items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.460000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.460000 audit: BPF prog-id=217 op=LOAD Dec 12 19:32:44.460000 audit[4345]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc7ee74fd0 a2=94 a3=6 items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.460000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.460000 audit: BPF prog-id=217 op=UNLOAD Dec 12 19:32:44.460000 audit[4345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc7ee74fd0 a2=0 a3=6 items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.460000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.461000 audit: BPF prog-id=218 op=LOAD Dec 12 19:32:44.461000 audit[4345]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc7ee74780 a2=94 a3=88 items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.461000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.461000 audit: BPF prog-id=219 op=LOAD Dec 12 19:32:44.461000 audit[4345]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc7ee74600 a2=94 a3=2 items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.461000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.461000 audit: BPF prog-id=219 op=UNLOAD Dec 12 19:32:44.461000 audit[4345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc7ee74630 a2=0 a3=7ffc7ee74730 items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.461000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.462000 audit: BPF prog-id=218 op=UNLOAD Dec 12 19:32:44.462000 audit[4345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=36d8dd10 a2=0 a3=e1a5033ddc1baa04 items=0 ppid=4178 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.462000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 19:32:44.471000 audit: BPF prog-id=210 op=UNLOAD Dec 12 19:32:44.471000 audit[4178]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000eaad00 a2=0 a3=0 items=0 ppid=4169 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.471000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 19:32:44.546000 audit[4371]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4371 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 19:32:44.546000 audit[4371]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc8a2f7ea0 a2=0 a3=7ffc8a2f7e8c items=0 ppid=4178 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.546000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 19:32:44.552000 audit[4373]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4373 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 19:32:44.552000 audit[4373]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffdaf2fd230 a2=0 a3=7ffdaf2fd21c items=0 ppid=4178 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.552000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 19:32:44.554000 audit[4374]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4374 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 19:32:44.554000 audit[4374]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe4d08d6d0 a2=0 a3=7ffe4d08d6bc items=0 ppid=4178 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.554000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 19:32:44.572000 audit[4378]: NETFILTER_CFG 
table=filter:124 family=2 entries=94 op=nft_register_chain pid=4378 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 19:32:44.572000 audit[4378]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffcf71ca350 a2=0 a3=7ffcf71ca33c items=0 ppid=4178 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.572000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 19:32:44.659399 kubelet[2938]: E1212 19:32:44.659308 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6595c59d9d-t84l2" podUID="b2777763-2a14-4bdc-bc89-41885f08913f" Dec 12 19:32:44.703646 systemd-networkd[1573]: calibc9ad81493a: Gained IPv6LL Dec 12 19:32:44.716000 audit[4388]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:44.716000 audit[4388]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcb19b7a80 a2=0 a3=7ffcb19b7a6c items=0 ppid=3062 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:44.721000 audit[4388]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:44.721000 audit[4388]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcb19b7a80 a2=0 a3=0 items=0 ppid=3062 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:44.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:45.216099 systemd-networkd[1573]: vxlan.calico: Gained IPv6LL Dec 12 19:32:45.239132 containerd[1670]: time="2025-12-12T19:32:45.238910107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-klld4,Uid:3228a46d-97a1-46c2-a390-a03b4bb70892,Namespace:calico-system,Attempt:0,}" Dec 12 19:32:45.436596 systemd-networkd[1573]: calied2bafb08f3: Link UP Dec 12 19:32:45.439037 systemd-networkd[1573]: calied2bafb08f3: Gained carrier Dec 12 
19:32:45.466844 containerd[1670]: 2025-12-12 19:32:45.305 [INFO][4390] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i3fa2.gb1.brightbox.com-k8s-csi--node--driver--klld4-eth0 csi-node-driver- calico-system 3228a46d-97a1-46c2-a390-a03b4bb70892 722 0 2025-12-12 19:32:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-i3fa2.gb1.brightbox.com csi-node-driver-klld4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calied2bafb08f3 [] [] }} ContainerID="a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" Namespace="calico-system" Pod="csi-node-driver-klld4" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-csi--node--driver--klld4-" Dec 12 19:32:45.466844 containerd[1670]: 2025-12-12 19:32:45.305 [INFO][4390] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" Namespace="calico-system" Pod="csi-node-driver-klld4" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-csi--node--driver--klld4-eth0" Dec 12 19:32:45.466844 containerd[1670]: 2025-12-12 19:32:45.370 [INFO][4404] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" HandleID="k8s-pod-network.a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" Workload="srv--i3fa2.gb1.brightbox.com-k8s-csi--node--driver--klld4-eth0" Dec 12 19:32:45.468611 containerd[1670]: 2025-12-12 19:32:45.370 [INFO][4404] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" HandleID="k8s-pod-network.a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" Workload="srv--i3fa2.gb1.brightbox.com-k8s-csi--node--driver--klld4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-i3fa2.gb1.brightbox.com", "pod":"csi-node-driver-klld4", "timestamp":"2025-12-12 19:32:45.370135748 +0000 UTC"}, Hostname:"srv-i3fa2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 19:32:45.468611 containerd[1670]: 2025-12-12 19:32:45.370 [INFO][4404] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 19:32:45.468611 containerd[1670]: 2025-12-12 19:32:45.370 [INFO][4404] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 19:32:45.468611 containerd[1670]: 2025-12-12 19:32:45.370 [INFO][4404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i3fa2.gb1.brightbox.com' Dec 12 19:32:45.468611 containerd[1670]: 2025-12-12 19:32:45.379 [INFO][4404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:45.468611 containerd[1670]: 2025-12-12 19:32:45.385 [INFO][4404] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:45.468611 containerd[1670]: 2025-12-12 19:32:45.393 [INFO][4404] ipam/ipam.go 511: Trying affinity for 192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:45.468611 containerd[1670]: 2025-12-12 19:32:45.396 [INFO][4404] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:45.468611 containerd[1670]: 2025-12-12 19:32:45.400 [INFO][4404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:45.468890 containerd[1670]: 2025-12-12 19:32:45.400 [INFO][4404] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.64/26 handle="k8s-pod-network.a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:45.468890 containerd[1670]: 2025-12-12 19:32:45.403 [INFO][4404] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd Dec 12 19:32:45.468890 containerd[1670]: 2025-12-12 19:32:45.410 [INFO][4404] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.64/26 handle="k8s-pod-network.a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:45.468890 containerd[1670]: 2025-12-12 19:32:45.419 [INFO][4404] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.66/26] block=192.168.98.64/26 handle="k8s-pod-network.a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:45.468890 containerd[1670]: 2025-12-12 19:32:45.420 [INFO][4404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.66/26] handle="k8s-pod-network.a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:45.468890 containerd[1670]: 2025-12-12 19:32:45.420 [INFO][4404] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 19:32:45.468890 containerd[1670]: 2025-12-12 19:32:45.420 [INFO][4404] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.66/26] IPv6=[] ContainerID="a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" HandleID="k8s-pod-network.a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" Workload="srv--i3fa2.gb1.brightbox.com-k8s-csi--node--driver--klld4-eth0" Dec 12 19:32:45.469079 containerd[1670]: 2025-12-12 19:32:45.427 [INFO][4390] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" Namespace="calico-system" Pod="csi-node-driver-klld4" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-csi--node--driver--klld4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i3fa2.gb1.brightbox.com-k8s-csi--node--driver--klld4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3228a46d-97a1-46c2-a390-a03b4bb70892", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i3fa2.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-klld4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calied2bafb08f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:32:45.469149 containerd[1670]: 2025-12-12 19:32:45.428 [INFO][4390] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.66/32] ContainerID="a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" Namespace="calico-system" Pod="csi-node-driver-klld4" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-csi--node--driver--klld4-eth0" Dec 12 19:32:45.469149 containerd[1670]: 2025-12-12 19:32:45.428 [INFO][4390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied2bafb08f3 ContainerID="a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" Namespace="calico-system" Pod="csi-node-driver-klld4" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-csi--node--driver--klld4-eth0" Dec 12 19:32:45.469149 containerd[1670]: 2025-12-12 19:32:45.438 [INFO][4390] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" Namespace="calico-system" Pod="csi-node-driver-klld4" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-csi--node--driver--klld4-eth0" Dec 12 19:32:45.469228 containerd[1670]: 2025-12-12 19:32:45.441 [INFO][4390] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" Namespace="calico-system" Pod="csi-node-driver-klld4" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-csi--node--driver--klld4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i3fa2.gb1.brightbox.com-k8s-csi--node--driver--klld4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3228a46d-97a1-46c2-a390-a03b4bb70892", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i3fa2.gb1.brightbox.com", ContainerID:"a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd", Pod:"csi-node-driver-klld4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calied2bafb08f3", MAC:"a2:aa:f4:e2:f2:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:32:45.469302 containerd[1670]: 2025-12-12 19:32:45.458 [INFO][4390] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" Namespace="calico-system" Pod="csi-node-driver-klld4" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-csi--node--driver--klld4-eth0" Dec 12 19:32:45.527234 containerd[1670]: time="2025-12-12T19:32:45.527178435Z" level=info msg="connecting to shim a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd" address="unix:///run/containerd/s/e57ddbd44890ee45f9a0d31d86a361a40177a0c76582715487550cf66d15ec17" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:32:45.544000 audit[4433]: NETFILTER_CFG table=filter:127 family=2 entries=36 op=nft_register_chain pid=4433 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 19:32:45.544000 audit[4433]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffc17999eb0 a2=0 a3=7ffc17999e9c items=0 ppid=4178 pid=4433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:45.544000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 19:32:45.582928 systemd[1]: Started cri-containerd-a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd.scope - libcontainer container a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd. 
Dec 12 19:32:45.614000 audit: BPF prog-id=220 op=LOAD Dec 12 19:32:45.615000 audit: BPF prog-id=221 op=LOAD Dec 12 19:32:45.615000 audit[4440]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4426 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:45.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130613061623966343565316539366638313038663432373466643135 Dec 12 19:32:45.615000 audit: BPF prog-id=221 op=UNLOAD Dec 12 19:32:45.615000 audit[4440]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4426 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:45.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130613061623966343565316539366638313038663432373466643135 Dec 12 19:32:45.615000 audit: BPF prog-id=222 op=LOAD Dec 12 19:32:45.615000 audit[4440]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4426 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:45.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130613061623966343565316539366638313038663432373466643135 Dec 12 19:32:45.615000 audit: BPF prog-id=223 op=LOAD Dec 12 19:32:45.615000 audit[4440]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4426 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:45.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130613061623966343565316539366638313038663432373466643135 Dec 12 19:32:45.615000 audit: BPF prog-id=223 op=UNLOAD Dec 12 19:32:45.615000 audit[4440]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4426 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:45.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130613061623966343565316539366638313038663432373466643135 Dec 12 19:32:45.615000 audit: BPF prog-id=222 op=UNLOAD Dec 12 19:32:45.615000 audit[4440]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4426 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:45.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130613061623966343565316539366638313038663432373466643135 Dec 12 19:32:45.615000 audit: BPF prog-id=224 op=LOAD Dec 12 19:32:45.615000 audit[4440]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4426 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:45.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130613061623966343565316539366638313038663432373466643135 Dec 12 19:32:45.637197 containerd[1670]: time="2025-12-12T19:32:45.637115303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-klld4,Uid:3228a46d-97a1-46c2-a390-a03b4bb70892,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0a0ab9f45e1e96f8108f4274fd152db7bd72de1c847ce7491225b71fdc0f2bd\"" Dec 12 19:32:45.640384 containerd[1670]: time="2025-12-12T19:32:45.640294247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 19:32:45.983227 containerd[1670]: time="2025-12-12T19:32:45.982918232Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:32:45.984487 containerd[1670]: time="2025-12-12T19:32:45.984360454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 19:32:45.984816 containerd[1670]: time="2025-12-12T19:32:45.984423873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 19:32:45.985233 kubelet[2938]: E1212 19:32:45.985158 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 19:32:45.985880 kubelet[2938]: E1212 19:32:45.985256 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 19:32:45.986068 kubelet[2938]: E1212 19:32:45.985998 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pl7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-klld4_calico-system(3228a46d-97a1-46c2-a390-a03b4bb70892): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 19:32:45.990467 containerd[1670]: time="2025-12-12T19:32:45.990373034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 19:32:46.213469 containerd[1670]: time="2025-12-12T19:32:46.213356766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dd47b96bf-8h6g2,Uid:efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac,Namespace:calico-system,Attempt:0,}" Dec 12 19:32:46.214117 containerd[1670]: time="2025-12-12T19:32:46.213934038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bf9fd49d-jx9n2,Uid:5344f808-2f57-4103-9d43-e41974952208,Namespace:calico-apiserver,Attempt:0,}" Dec 12 19:32:46.308746 containerd[1670]: time="2025-12-12T19:32:46.308428371Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:32:46.309748 containerd[1670]: time="2025-12-12T19:32:46.309713666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 19:32:46.310206 containerd[1670]: time="2025-12-12T19:32:46.310094008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 19:32:46.310568 kubelet[2938]: E1212 19:32:46.310516 2938 log.go:32] "PullImage from image service failed" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 19:32:46.310736 kubelet[2938]: E1212 19:32:46.310589 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 19:32:46.310775 kubelet[2938]: E1212 19:32:46.310745 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pl7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-klld4_calico-system(3228a46d-97a1-46c2-a390-a03b4bb70892): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 19:32:46.313528 kubelet[2938]: E1212 19:32:46.312243 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:32:46.411338 systemd-networkd[1573]: calie3f808cff64: Link UP Dec 12 19:32:46.412169 systemd-networkd[1573]: calie3f808cff64: Gained carrier Dec 12 19:32:46.431432 containerd[1670]: 2025-12-12 19:32:46.289 [INFO][4467] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i3fa2.gb1.brightbox.com-k8s-calico--kube--controllers--5dd47b96bf--8h6g2-eth0 calico-kube-controllers-5dd47b96bf- calico-system efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac 832 0 2025-12-12 19:32:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5dd47b96bf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-i3fa2.gb1.brightbox.com calico-kube-controllers-5dd47b96bf-8h6g2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie3f808cff64 [] [] }} ContainerID="36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" Namespace="calico-system" Pod="calico-kube-controllers-5dd47b96bf-8h6g2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--kube--controllers--5dd47b96bf--8h6g2-" Dec 12 19:32:46.431432 containerd[1670]: 2025-12-12 19:32:46.290 [INFO][4467] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" Namespace="calico-system" Pod="calico-kube-controllers-5dd47b96bf-8h6g2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--kube--controllers--5dd47b96bf--8h6g2-eth0" Dec 12 19:32:46.431432 containerd[1670]: 2025-12-12 19:32:46.343 [INFO][4491] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" HandleID="k8s-pod-network.36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" Workload="srv--i3fa2.gb1.brightbox.com-k8s-calico--kube--controllers--5dd47b96bf--8h6g2-eth0" Dec 12 19:32:46.433746 containerd[1670]: 2025-12-12 19:32:46.343 [INFO][4491] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" HandleID="k8s-pod-network.36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" Workload="srv--i3fa2.gb1.brightbox.com-k8s-calico--kube--controllers--5dd47b96bf--8h6g2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5290), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-i3fa2.gb1.brightbox.com", "pod":"calico-kube-controllers-5dd47b96bf-8h6g2", "timestamp":"2025-12-12 19:32:46.343586828 +0000 UTC"}, Hostname:"srv-i3fa2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 19:32:46.433746 containerd[1670]: 2025-12-12 19:32:46.344 [INFO][4491] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 19:32:46.433746 containerd[1670]: 2025-12-12 19:32:46.344 [INFO][4491] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 19:32:46.433746 containerd[1670]: 2025-12-12 19:32:46.344 [INFO][4491] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i3fa2.gb1.brightbox.com' Dec 12 19:32:46.433746 containerd[1670]: 2025-12-12 19:32:46.362 [INFO][4491] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.433746 containerd[1670]: 2025-12-12 19:32:46.369 [INFO][4491] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.433746 containerd[1670]: 2025-12-12 19:32:46.376 [INFO][4491] ipam/ipam.go 511: Trying affinity for 192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.433746 containerd[1670]: 2025-12-12 19:32:46.378 [INFO][4491] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.433746 containerd[1670]: 2025-12-12 19:32:46.381 [INFO][4491] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.435299 containerd[1670]: 2025-12-12 19:32:46.381 [INFO][4491] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.64/26 handle="k8s-pod-network.36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.435299 containerd[1670]: 2025-12-12 19:32:46.384 [INFO][4491] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983 Dec 12 19:32:46.435299 containerd[1670]: 2025-12-12 19:32:46.389 [INFO][4491] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.64/26 handle="k8s-pod-network.36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.435299 containerd[1670]: 2025-12-12 19:32:46.400 [INFO][4491] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.67/26] block=192.168.98.64/26 handle="k8s-pod-network.36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.435299 containerd[1670]: 2025-12-12 19:32:46.400 [INFO][4491] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.67/26] handle="k8s-pod-network.36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.435299 containerd[1670]: 2025-12-12 19:32:46.400 [INFO][4491] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 19:32:46.435299 containerd[1670]: 2025-12-12 19:32:46.400 [INFO][4491] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.67/26] IPv6=[] ContainerID="36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" HandleID="k8s-pod-network.36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" Workload="srv--i3fa2.gb1.brightbox.com-k8s-calico--kube--controllers--5dd47b96bf--8h6g2-eth0" Dec 12 19:32:46.436136 containerd[1670]: 2025-12-12 19:32:46.405 [INFO][4467] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" Namespace="calico-system" Pod="calico-kube-controllers-5dd47b96bf-8h6g2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--kube--controllers--5dd47b96bf--8h6g2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i3fa2.gb1.brightbox.com-k8s-calico--kube--controllers--5dd47b96bf--8h6g2-eth0", GenerateName:"calico-kube-controllers-5dd47b96bf-", Namespace:"calico-system", SelfLink:"", UID:"efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dd47b96bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i3fa2.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-5dd47b96bf-8h6g2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie3f808cff64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:32:46.436337 containerd[1670]: 2025-12-12 19:32:46.405 [INFO][4467] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.67/32] ContainerID="36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" Namespace="calico-system" Pod="calico-kube-controllers-5dd47b96bf-8h6g2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--kube--controllers--5dd47b96bf--8h6g2-eth0" Dec 12 19:32:46.436337 containerd[1670]: 2025-12-12 19:32:46.405 [INFO][4467] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3f808cff64 ContainerID="36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" Namespace="calico-system" Pod="calico-kube-controllers-5dd47b96bf-8h6g2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--kube--controllers--5dd47b96bf--8h6g2-eth0" Dec 12 19:32:46.436337 containerd[1670]: 2025-12-12 19:32:46.414 [INFO][4467] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" Namespace="calico-system" Pod="calico-kube-controllers-5dd47b96bf-8h6g2" 
WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--kube--controllers--5dd47b96bf--8h6g2-eth0" Dec 12 19:32:46.437709 containerd[1670]: 2025-12-12 19:32:46.415 [INFO][4467] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" Namespace="calico-system" Pod="calico-kube-controllers-5dd47b96bf-8h6g2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--kube--controllers--5dd47b96bf--8h6g2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i3fa2.gb1.brightbox.com-k8s-calico--kube--controllers--5dd47b96bf--8h6g2-eth0", GenerateName:"calico-kube-controllers-5dd47b96bf-", Namespace:"calico-system", SelfLink:"", UID:"efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dd47b96bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i3fa2.gb1.brightbox.com", ContainerID:"36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983", Pod:"calico-kube-controllers-5dd47b96bf-8h6g2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie3f808cff64", MAC:"fa:c0:09:17:21:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:32:46.437950 containerd[1670]: 2025-12-12 19:32:46.426 [INFO][4467] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" Namespace="calico-system" Pod="calico-kube-controllers-5dd47b96bf-8h6g2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--kube--controllers--5dd47b96bf--8h6g2-eth0" Dec 12 19:32:46.473466 kernel: kauditd_printk_skb: 256 callbacks suppressed Dec 12 19:32:46.473654 kernel: audit: type=1325 audit(1765567966.464:678): table=filter:128 family=2 entries=46 op=nft_register_chain pid=4512 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 19:32:46.464000 audit[4512]: NETFILTER_CFG table=filter:128 family=2 entries=46 op=nft_register_chain pid=4512 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 19:32:46.464000 audit[4512]: SYSCALL arch=c000003e syscall=46 success=yes exit=23616 a0=3 a1=7ffc52ad0820 a2=0 a3=7ffc52ad080c items=0 ppid=4178 pid=4512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.482443 kernel: audit: type=1300 audit(1765567966.464:678): arch=c000003e syscall=46 success=yes exit=23616 a0=3 a1=7ffc52ad0820 a2=0 a3=7ffc52ad080c items=0 ppid=4178 pid=4512 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.482549 kernel: audit: type=1327 audit(1765567966.464:678): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 19:32:46.464000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 19:32:46.536476 containerd[1670]: time="2025-12-12T19:32:46.536237998Z" level=info msg="connecting to shim 36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983" address="unix:///run/containerd/s/52cfeeff6f8796899a50668f3f1e5b279db7269aa03a46dd4533de3d8a8d424a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:32:46.581736 systemd[1]: Started cri-containerd-36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983.scope - libcontainer container 36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983. Dec 12 19:32:46.584132 systemd-networkd[1573]: calia8dca136c68: Link UP Dec 12 19:32:46.585667 systemd-networkd[1573]: calia8dca136c68: Gained carrier Dec 12 19:32:46.609314 containerd[1670]: 2025-12-12 19:32:46.311 [INFO][4476] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--jx9n2-eth0 calico-apiserver-58bf9fd49d- calico-apiserver 5344f808-2f57-4103-9d43-e41974952208 833 0 2025-12-12 19:32:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58bf9fd49d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-i3fa2.gb1.brightbox.com calico-apiserver-58bf9fd49d-jx9n2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia8dca136c68 [] [] }} ContainerID="c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" Namespace="calico-apiserver" Pod="calico-apiserver-58bf9fd49d-jx9n2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--jx9n2-" Dec 12 19:32:46.609314 containerd[1670]: 2025-12-12 19:32:46.313 [INFO][4476] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" Namespace="calico-apiserver" Pod="calico-apiserver-58bf9fd49d-jx9n2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--jx9n2-eth0" Dec 12 19:32:46.609314 containerd[1670]: 2025-12-12 19:32:46.373 [INFO][4496] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" HandleID="k8s-pod-network.c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" Workload="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--jx9n2-eth0" Dec 12 19:32:46.609713 containerd[1670]: 2025-12-12 19:32:46.374 [INFO][4496] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" HandleID="k8s-pod-network.c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" Workload="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--jx9n2-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-i3fa2.gb1.brightbox.com", "pod":"calico-apiserver-58bf9fd49d-jx9n2", "timestamp":"2025-12-12 19:32:46.373882707 +0000 UTC"}, Hostname:"srv-i3fa2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 19:32:46.609713 containerd[1670]: 2025-12-12 19:32:46.374 [INFO][4496] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 19:32:46.609713 containerd[1670]: 2025-12-12 19:32:46.400 [INFO][4496] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 19:32:46.609713 containerd[1670]: 2025-12-12 19:32:46.401 [INFO][4496] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i3fa2.gb1.brightbox.com' Dec 12 19:32:46.609713 containerd[1670]: 2025-12-12 19:32:46.485 [INFO][4496] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.609713 containerd[1670]: 2025-12-12 19:32:46.506 [INFO][4496] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.609713 containerd[1670]: 2025-12-12 19:32:46.522 [INFO][4496] ipam/ipam.go 511: Trying affinity for 192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.609713 containerd[1670]: 2025-12-12 19:32:46.526 [INFO][4496] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.609713 containerd[1670]: 2025-12-12 19:32:46.530 [INFO][4496] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.611586 containerd[1670]: 2025-12-12 19:32:46.530 [INFO][4496] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.64/26 handle="k8s-pod-network.c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.611586 containerd[1670]: 2025-12-12 19:32:46.533 [INFO][4496] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8 Dec 12 19:32:46.611586 containerd[1670]: 2025-12-12 19:32:46.552 [INFO][4496] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.64/26 handle="k8s-pod-network.c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.611586 containerd[1670]: 2025-12-12 19:32:46.563 [INFO][4496] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.68/26] block=192.168.98.64/26 handle="k8s-pod-network.c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.611586 containerd[1670]: 2025-12-12 19:32:46.563 [INFO][4496] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.68/26] handle="k8s-pod-network.c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:46.611586 containerd[1670]: 2025-12-12 19:32:46.563 [INFO][4496] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 19:32:46.611586 containerd[1670]: 2025-12-12 19:32:46.563 [INFO][4496] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.68/26] IPv6=[] ContainerID="c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" HandleID="k8s-pod-network.c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" Workload="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--jx9n2-eth0" Dec 12 19:32:46.611864 containerd[1670]: 2025-12-12 19:32:46.574 [INFO][4476] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" Namespace="calico-apiserver" Pod="calico-apiserver-58bf9fd49d-jx9n2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--jx9n2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--jx9n2-eth0", GenerateName:"calico-apiserver-58bf9fd49d-", Namespace:"calico-apiserver", SelfLink:"", UID:"5344f808-2f57-4103-9d43-e41974952208", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58bf9fd49d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i3fa2.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-58bf9fd49d-jx9n2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia8dca136c68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:32:46.611960 containerd[1670]: 2025-12-12 19:32:46.574 [INFO][4476] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.68/32] ContainerID="c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" Namespace="calico-apiserver" Pod="calico-apiserver-58bf9fd49d-jx9n2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--jx9n2-eth0" Dec 12 19:32:46.611960 containerd[1670]: 2025-12-12 19:32:46.574 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8dca136c68 ContainerID="c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" Namespace="calico-apiserver" Pod="calico-apiserver-58bf9fd49d-jx9n2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--jx9n2-eth0" Dec 12 19:32:46.611960 containerd[1670]: 2025-12-12 19:32:46.585 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" Namespace="calico-apiserver" Pod="calico-apiserver-58bf9fd49d-jx9n2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--jx9n2-eth0" Dec 12 19:32:46.612047 containerd[1670]: 2025-12-12 19:32:46.586 
[INFO][4476] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" Namespace="calico-apiserver" Pod="calico-apiserver-58bf9fd49d-jx9n2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--jx9n2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--jx9n2-eth0", GenerateName:"calico-apiserver-58bf9fd49d-", Namespace:"calico-apiserver", SelfLink:"", UID:"5344f808-2f57-4103-9d43-e41974952208", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58bf9fd49d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i3fa2.gb1.brightbox.com", ContainerID:"c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8", Pod:"calico-apiserver-58bf9fd49d-jx9n2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia8dca136c68", MAC:"7e:65:6a:f7:7b:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:32:46.612113 containerd[1670]: 2025-12-12 19:32:46.603 [INFO][4476] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" Namespace="calico-apiserver" Pod="calico-apiserver-58bf9fd49d-jx9n2" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--jx9n2-eth0" Dec 12 19:32:46.616000 audit: BPF prog-id=225 op=LOAD Dec 12 19:32:46.619484 kernel: audit: type=1334 audit(1765567966.616:679): prog-id=225 op=LOAD Dec 12 19:32:46.624223 kernel: audit: type=1334 audit(1765567966.619:680): prog-id=226 op=LOAD Dec 12 19:32:46.624408 kernel: audit: type=1300 audit(1765567966.619:680): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=4522 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.619000 audit: BPF prog-id=226 op=LOAD Dec 12 19:32:46.619000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=4522 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.619000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393934303436663363366364643832336538666531376330346633 Dec 12 19:32:46.629709 kernel: audit: type=1327 audit(1765567966.619:680): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393934303436663363366364643832336538666531376330346633 Dec 12 19:32:46.619000 audit: BPF prog-id=226 op=UNLOAD Dec 12 19:32:46.636479 kernel: audit: type=1334 audit(1765567966.619:681): prog-id=226 op=UNLOAD Dec 12 19:32:46.649882 kernel: audit: type=1300 audit(1765567966.619:681): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4522 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.619000 audit[4533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4522 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393934303436663363366364643832336538666531376330346633 Dec 12 19:32:46.656717 kernel: audit: type=1327 audit(1765567966.619:681): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393934303436663363366364643832336538666531376330346633 Dec 12 19:32:46.619000 audit: BPF prog-id=227 op=LOAD Dec 12 19:32:46.619000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=4522 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393934303436663363366364643832336538666531376330346633 Dec 12 19:32:46.619000 audit: BPF prog-id=228 op=LOAD Dec 12 19:32:46.619000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=4522 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393934303436663363366364643832336538666531376330346633 Dec 12 19:32:46.619000 audit: BPF prog-id=228 op=UNLOAD Dec 12 19:32:46.619000 audit[4533]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4522 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393934303436663363366364643832336538666531376330346633 Dec 12 19:32:46.619000 audit: BPF prog-id=227 op=UNLOAD Dec 12 19:32:46.619000 audit[4533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4522 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393934303436663363366364643832336538666531376330346633 Dec 12 19:32:46.619000 audit: BPF prog-id=229 op=LOAD Dec 12 19:32:46.619000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=4522 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336393934303436663363366364643832336538666531376330346633 Dec 12 19:32:46.643000 audit[4559]: NETFILTER_CFG table=filter:129 family=2 entries=54 op=nft_register_chain pid=4559 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 19:32:46.643000 audit[4559]: SYSCALL arch=c000003e syscall=46 success=yes exit=29380 a0=3 a1=7ffeb55a58a0 a2=0 a3=7ffeb55a588c items=0 ppid=4178 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.643000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 19:32:46.666770 kubelet[2938]: E1212 19:32:46.666582 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:32:46.676461 containerd[1670]: time="2025-12-12T19:32:46.674938596Z" level=info msg="connecting to shim c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8" address="unix:///run/containerd/s/f859a6d19cf321d7a3c0ff03bd1437b5e01d79f65d4ca9f3ab9e7c5601d1ab6b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:32:46.734799 systemd[1]: Started cri-containerd-c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8.scope - libcontainer container c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8. Dec 12 19:32:46.749902 containerd[1670]: time="2025-12-12T19:32:46.749825338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dd47b96bf-8h6g2,Uid:efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac,Namespace:calico-system,Attempt:0,} returns sandbox id \"36994046f3c6cdd823e8fe17c04f3d23afaf12b374ba2964ebecbed948a97983\"" Dec 12 19:32:46.755466 containerd[1670]: time="2025-12-12T19:32:46.754541291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 19:32:46.755000 audit: BPF prog-id=230 op=LOAD Dec 12 19:32:46.756000 audit: BPF prog-id=231 op=LOAD Dec 12 19:32:46.756000 audit[4579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4568 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330393265366533363839653365393030303538326139326238383063 Dec 12 19:32:46.756000 audit: BPF prog-id=231 op=UNLOAD Dec 12 19:32:46.756000 audit[4579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4568 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330393265366533363839653365393030303538326139326238383063 Dec 12 19:32:46.756000 audit: BPF prog-id=232 op=LOAD Dec 12 19:32:46.756000 audit[4579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4568 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330393265366533363839653365393030303538326139326238383063 Dec 12 19:32:46.757000 audit: BPF prog-id=233 op=LOAD Dec 12 19:32:46.757000 audit[4579]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4568 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330393265366533363839653365393030303538326139326238383063 Dec 12 19:32:46.757000 audit: BPF prog-id=233 op=UNLOAD Dec 12 19:32:46.757000 audit[4579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4568 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330393265366533363839653365393030303538326139326238383063 Dec 12 19:32:46.758000 audit: BPF prog-id=232 op=UNLOAD Dec 12 19:32:46.758000 audit[4579]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4568 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330393265366533363839653365393030303538326139326238383063 Dec 12 19:32:46.758000 audit: BPF prog-id=234 op=LOAD Dec 12 19:32:46.758000 audit[4579]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4568 pid=4579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:46.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330393265366533363839653365393030303538326139326238383063 Dec 12 19:32:46.812833 containerd[1670]: time="2025-12-12T19:32:46.812788853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bf9fd49d-jx9n2,Uid:5344f808-2f57-4103-9d43-e41974952208,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c092e6e3689e3e9000582a92b880c9eda783d7b827b041c6c6cf51ad46472ea8\"" Dec 12 19:32:47.067660 containerd[1670]: time="2025-12-12T19:32:47.067185920Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:32:47.068976 containerd[1670]: time="2025-12-12T19:32:47.068645200Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 19:32:47.069719 kubelet[2938]: E1212 19:32:47.069500 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 19:32:47.071372 kubelet[2938]: E1212 19:32:47.070070 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 19:32:47.071372 kubelet[2938]: E1212 19:32:47.070492 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gsqn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5dd47b96bf-8h6g2_calico-system(efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 19:32:47.071921 kubelet[2938]: E1212 19:32:47.071878 2938 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dd47b96bf-8h6g2" podUID="efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac" Dec 12 19:32:47.072997 containerd[1670]: time="2025-12-12T19:32:47.068697788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 19:32:47.072997 containerd[1670]: time="2025-12-12T19:32:47.071332569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 19:32:47.212417 containerd[1670]: time="2025-12-12T19:32:47.212015374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2v62p,Uid:92bfb34d-8a18-4385-8d42-b49803a8d3e6,Namespace:kube-system,Attempt:0,}" Dec 12 19:32:47.212843 containerd[1670]: time="2025-12-12T19:32:47.212657924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bf9fd49d-cblcg,Uid:bde65fa8-758f-4f39-b274-b1238cc0fdac,Namespace:calico-apiserver,Attempt:0,}" Dec 12 19:32:47.328160 systemd-networkd[1573]: calied2bafb08f3: Gained IPv6LL Dec 12 19:32:47.376882 containerd[1670]: time="2025-12-12T19:32:47.376833862Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:32:47.378066 containerd[1670]: time="2025-12-12T19:32:47.378031960Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 19:32:47.379544 containerd[1670]: time="2025-12-12T19:32:47.378089259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 19:32:47.380196 kubelet[2938]: E1212 19:32:47.379808 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:32:47.380196 kubelet[2938]: E1212 19:32:47.379868 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:32:47.390141 kubelet[2938]: E1212 19:32:47.380079 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pj2pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58bf9fd49d-jx9n2_calico-apiserver(5344f808-2f57-4103-9d43-e41974952208): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 19:32:47.393027 kubelet[2938]: E1212 19:32:47.392967 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-jx9n2" podUID="5344f808-2f57-4103-9d43-e41974952208" Dec 12 19:32:47.410590 systemd-networkd[1573]: cali6c25b6af340: Link UP Dec 12 19:32:47.421698 systemd-networkd[1573]: cali6c25b6af340: Gained carrier Dec 12 19:32:47.451994 containerd[1670]: 2025-12-12 19:32:47.274 [INFO][4615] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--2v62p-eth0 coredns-674b8bbfcf- kube-system 92bfb34d-8a18-4385-8d42-b49803a8d3e6 838 0 2025-12-12 19:32:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-i3fa2.gb1.brightbox.com coredns-674b8bbfcf-2v62p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6c25b6af340 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] 
[] }} ContainerID="c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" Namespace="kube-system" Pod="coredns-674b8bbfcf-2v62p" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--2v62p-" Dec 12 19:32:47.451994 containerd[1670]: 2025-12-12 19:32:47.275 [INFO][4615] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" Namespace="kube-system" Pod="coredns-674b8bbfcf-2v62p" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--2v62p-eth0" Dec 12 19:32:47.451994 containerd[1670]: 2025-12-12 19:32:47.326 [INFO][4636] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" HandleID="k8s-pod-network.c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" Workload="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--2v62p-eth0" Dec 12 19:32:47.452258 containerd[1670]: 2025-12-12 19:32:47.326 [INFO][4636] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" HandleID="k8s-pod-network.c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" Workload="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--2v62p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf9b0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-i3fa2.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-2v62p", "timestamp":"2025-12-12 19:32:47.326672806 +0000 UTC"}, Hostname:"srv-i3fa2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 19:32:47.452258 containerd[1670]: 2025-12-12 19:32:47.327 [INFO][4636] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 19:32:47.452258 containerd[1670]: 2025-12-12 19:32:47.327 [INFO][4636] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 19:32:47.452258 containerd[1670]: 2025-12-12 19:32:47.327 [INFO][4636] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i3fa2.gb1.brightbox.com' Dec 12 19:32:47.452258 containerd[1670]: 2025-12-12 19:32:47.338 [INFO][4636] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.452258 containerd[1670]: 2025-12-12 19:32:47.344 [INFO][4636] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.452258 containerd[1670]: 2025-12-12 19:32:47.354 [INFO][4636] ipam/ipam.go 511: Trying affinity for 192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.452258 containerd[1670]: 2025-12-12 19:32:47.358 [INFO][4636] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.452258 containerd[1670]: 2025-12-12 19:32:47.364 [INFO][4636] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.452546 containerd[1670]: 2025-12-12 19:32:47.364 [INFO][4636] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.64/26 handle="k8s-pod-network.c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.452546 containerd[1670]: 2025-12-12 19:32:47.366 [INFO][4636] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd Dec 12 19:32:47.452546 containerd[1670]: 2025-12-12 19:32:47.376 [INFO][4636] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.64/26 handle="k8s-pod-network.c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.452546 containerd[1670]: 2025-12-12 19:32:47.397 [INFO][4636] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.69/26] block=192.168.98.64/26 handle="k8s-pod-network.c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.452546 containerd[1670]: 2025-12-12 19:32:47.398 [INFO][4636] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.69/26] handle="k8s-pod-network.c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.452546 containerd[1670]: 2025-12-12 19:32:47.398 [INFO][4636] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 19:32:47.452546 containerd[1670]: 2025-12-12 19:32:47.398 [INFO][4636] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.69/26] IPv6=[] ContainerID="c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" HandleID="k8s-pod-network.c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" Workload="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--2v62p-eth0" Dec 12 19:32:47.452791 containerd[1670]: 2025-12-12 19:32:47.406 [INFO][4615] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" Namespace="kube-system" Pod="coredns-674b8bbfcf-2v62p" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--2v62p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--2v62p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"92bfb34d-8a18-4385-8d42-b49803a8d3e6", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i3fa2.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-2v62p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c25b6af340", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:32:47.452791 containerd[1670]: 2025-12-12 19:32:47.406 [INFO][4615] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.69/32] ContainerID="c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" Namespace="kube-system" Pod="coredns-674b8bbfcf-2v62p" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--2v62p-eth0" Dec 12 19:32:47.452791 containerd[1670]: 2025-12-12 19:32:47.406 [INFO][4615] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c25b6af340 ContainerID="c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" Namespace="kube-system" Pod="coredns-674b8bbfcf-2v62p" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--2v62p-eth0" Dec 12 19:32:47.452791 containerd[1670]: 2025-12-12 19:32:47.421 [INFO][4615] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-2v62p" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--2v62p-eth0" Dec 12 19:32:47.452791 containerd[1670]: 2025-12-12 19:32:47.425 [INFO][4615] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" Namespace="kube-system" Pod="coredns-674b8bbfcf-2v62p" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--2v62p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--2v62p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"92bfb34d-8a18-4385-8d42-b49803a8d3e6", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i3fa2.gb1.brightbox.com", ContainerID:"c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd", Pod:"coredns-674b8bbfcf-2v62p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6c25b6af340", MAC:"be:7d:45:02:c2:63", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:32:47.452791 containerd[1670]: 2025-12-12 19:32:47.445 [INFO][4615] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" Namespace="kube-system" Pod="coredns-674b8bbfcf-2v62p" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--2v62p-eth0" Dec 12 19:32:47.497779 containerd[1670]: time="2025-12-12T19:32:47.497717283Z" level=info msg="connecting to shim c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd" address="unix:///run/containerd/s/dd87a05da7707afb00eb47cb41651650ae9b622a61d63caeab99b8d094ff8d63" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:32:47.558685 systemd[1]: Started cri-containerd-c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd.scope - libcontainer container c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd. 
Dec 12 19:32:47.602508 systemd-networkd[1573]: cali2c3e4521325: Link UP Dec 12 19:32:47.603819 systemd-networkd[1573]: cali2c3e4521325: Gained carrier Dec 12 19:32:47.606000 audit: BPF prog-id=235 op=LOAD Dec 12 19:32:47.608000 audit: BPF prog-id=236 op=LOAD Dec 12 19:32:47.608000 audit[4677]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4665 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335356465663538643366663863386463396639613664303436646135 Dec 12 19:32:47.608000 audit: BPF prog-id=236 op=UNLOAD Dec 12 19:32:47.608000 audit[4677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4665 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335356465663538643366663863386463396639613664303436646135 Dec 12 19:32:47.609000 audit: BPF prog-id=237 op=LOAD Dec 12 19:32:47.609000 audit[4677]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4665 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335356465663538643366663863386463396639613664303436646135 Dec 12 19:32:47.609000 audit: BPF prog-id=238 op=LOAD Dec 12 19:32:47.609000 audit[4677]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4665 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335356465663538643366663863386463396639613664303436646135 Dec 12 19:32:47.612000 audit: BPF prog-id=238 op=UNLOAD Dec 12 19:32:47.612000 audit[4677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4665 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.612000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335356465663538643366663863386463396639613664303436646135 Dec 12 19:32:47.612000 audit: BPF prog-id=237 op=UNLOAD Dec 12 19:32:47.612000 audit[4677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4665 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335356465663538643366663863386463396639613664303436646135 Dec 12 19:32:47.612000 audit: BPF prog-id=239 op=LOAD Dec 12 19:32:47.612000 audit[4677]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4665 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335356465663538643366663863386463396639613664303436646135 Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.279 [INFO][4612] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--cblcg-eth0 calico-apiserver-58bf9fd49d- calico-apiserver bde65fa8-758f-4f39-b274-b1238cc0fdac 830 0 2025-12-12 19:32:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58bf9fd49d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-i3fa2.gb1.brightbox.com calico-apiserver-58bf9fd49d-cblcg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2c3e4521325 [] [] }} ContainerID="b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" Namespace="calico-apiserver" Pod="calico-apiserver-58bf9fd49d-cblcg" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--cblcg-" Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.279 [INFO][4612] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" Namespace="calico-apiserver" Pod="calico-apiserver-58bf9fd49d-cblcg" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--cblcg-eth0" Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.337 [INFO][4639] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" HandleID="k8s-pod-network.b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" Workload="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--cblcg-eth0" Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.337 [INFO][4639] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" HandleID="k8s-pod-network.b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" Workload="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--cblcg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-i3fa2.gb1.brightbox.com", "pod":"calico-apiserver-58bf9fd49d-cblcg", "timestamp":"2025-12-12 19:32:47.337496389 +0000 UTC"}, Hostname:"srv-i3fa2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.337 [INFO][4639] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.398 [INFO][4639] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.398 [INFO][4639] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i3fa2.gb1.brightbox.com' Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.442 [INFO][4639] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.456 [INFO][4639] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.504 [INFO][4639] ipam/ipam.go 511: Trying affinity for 192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.523 [INFO][4639] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.529 [INFO][4639] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.529 [INFO][4639] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.64/26 handle="k8s-pod-network.b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.533 [INFO][4639] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883 Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.554 [INFO][4639] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.64/26 handle="k8s-pod-network.b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.585 [INFO][4639] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.70/26] block=192.168.98.64/26 handle="k8s-pod-network.b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.586 [INFO][4639] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.70/26] handle="k8s-pod-network.b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" 
host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.586 [INFO][4639] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 19:32:47.644227 containerd[1670]: 2025-12-12 19:32:47.586 [INFO][4639] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.70/26] IPv6=[] ContainerID="b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" HandleID="k8s-pod-network.b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" Workload="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--cblcg-eth0" Dec 12 19:32:47.644996 containerd[1670]: 2025-12-12 19:32:47.593 [INFO][4612] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" Namespace="calico-apiserver" Pod="calico-apiserver-58bf9fd49d-cblcg" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--cblcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--cblcg-eth0", GenerateName:"calico-apiserver-58bf9fd49d-", Namespace:"calico-apiserver", SelfLink:"", UID:"bde65fa8-758f-4f39-b274-b1238cc0fdac", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58bf9fd49d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i3fa2.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-58bf9fd49d-cblcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c3e4521325", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:32:47.644996 containerd[1670]: 2025-12-12 19:32:47.595 [INFO][4612] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.70/32] ContainerID="b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" Namespace="calico-apiserver" Pod="calico-apiserver-58bf9fd49d-cblcg" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--cblcg-eth0" Dec 12 19:32:47.644996 containerd[1670]: 2025-12-12 19:32:47.595 [INFO][4612] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c3e4521325 ContainerID="b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" Namespace="calico-apiserver" Pod="calico-apiserver-58bf9fd49d-cblcg" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--cblcg-eth0" Dec 12 19:32:47.644996 containerd[1670]: 2025-12-12 19:32:47.612 [INFO][4612] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" Namespace="calico-apiserver" 
Pod="calico-apiserver-58bf9fd49d-cblcg" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--cblcg-eth0" Dec 12 19:32:47.644996 containerd[1670]: 2025-12-12 19:32:47.616 [INFO][4612] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" Namespace="calico-apiserver" Pod="calico-apiserver-58bf9fd49d-cblcg" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--cblcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--cblcg-eth0", GenerateName:"calico-apiserver-58bf9fd49d-", Namespace:"calico-apiserver", SelfLink:"", UID:"bde65fa8-758f-4f39-b274-b1238cc0fdac", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 32, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58bf9fd49d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i3fa2.gb1.brightbox.com", ContainerID:"b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883", Pod:"calico-apiserver-58bf9fd49d-cblcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c3e4521325", MAC:"fe:08:c0:6d:8f:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:32:47.644996 containerd[1670]: 2025-12-12 19:32:47.640 [INFO][4612] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" Namespace="calico-apiserver" Pod="calico-apiserver-58bf9fd49d-cblcg" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-calico--apiserver--58bf9fd49d--cblcg-eth0" Dec 12 19:32:47.690459 kubelet[2938]: E1212 19:32:47.690411 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dd47b96bf-8h6g2" podUID="efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac" Dec 12 19:32:47.694210 kubelet[2938]: E1212 19:32:47.694169 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-jx9n2" podUID="5344f808-2f57-4103-9d43-e41974952208" Dec 12 19:32:47.698527 containerd[1670]: time="2025-12-12T19:32:47.698279435Z" level=info msg="connecting to shim b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883" address="unix:///run/containerd/s/24874292731bf8f65cfcfe1d3e4b1953ad8a9c4393cb35d4aeaac558181fc055" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:32:47.721312 containerd[1670]: time="2025-12-12T19:32:47.720345600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2v62p,Uid:92bfb34d-8a18-4385-8d42-b49803a8d3e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd\"" Dec 12 19:32:47.742000 audit[4727]: NETFILTER_CFG table=filter:130 family=2 entries=50 op=nft_register_chain pid=4727 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 19:32:47.742000 audit[4727]: SYSCALL arch=c000003e syscall=46 success=yes exit=24912 a0=3 a1=7fffbf5797b0 a2=0 a3=7fffbf57979c items=0 ppid=4178 pid=4727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.742000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 19:32:47.746989 containerd[1670]: time="2025-12-12T19:32:47.746576395Z" level=info msg="CreateContainer within sandbox \"c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 19:32:47.768728 systemd[1]: Started cri-containerd-b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883.scope - libcontainer container b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883. 
Dec 12 19:32:47.770784 containerd[1670]: time="2025-12-12T19:32:47.769170244Z" level=info msg="Container 173bd4b358b9c5dc21cca93aadb00558a9839bfea31995a82b7ee1c278cd6ae7: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:32:47.777458 containerd[1670]: time="2025-12-12T19:32:47.777366601Z" level=info msg="CreateContainer within sandbox \"c55def58d3ff8c8dc9f9a6d046da50bcc74fb13955d2608fcdf8701af7de8acd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"173bd4b358b9c5dc21cca93aadb00558a9839bfea31995a82b7ee1c278cd6ae7\"" Dec 12 19:32:47.779804 containerd[1670]: time="2025-12-12T19:32:47.779764728Z" level=info msg="StartContainer for \"173bd4b358b9c5dc21cca93aadb00558a9839bfea31995a82b7ee1c278cd6ae7\"" Dec 12 19:32:47.781649 containerd[1670]: time="2025-12-12T19:32:47.781614899Z" level=info msg="connecting to shim 173bd4b358b9c5dc21cca93aadb00558a9839bfea31995a82b7ee1c278cd6ae7" address="unix:///run/containerd/s/dd87a05da7707afb00eb47cb41651650ae9b622a61d63caeab99b8d094ff8d63" protocol=ttrpc version=3 Dec 12 19:32:47.795000 audit: BPF prog-id=240 op=LOAD Dec 12 19:32:47.797000 audit: BPF prog-id=241 op=LOAD Dec 12 19:32:47.797000 audit[4734]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4717 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233653139623131363731616566353239333065623361316136623837 Dec 12 19:32:47.798000 audit: BPF prog-id=241 op=UNLOAD Dec 12 19:32:47.798000 audit[4734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4717 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233653139623131363731616566353239333065623361316136623837 Dec 12 19:32:47.799000 audit: BPF prog-id=242 op=LOAD Dec 12 19:32:47.799000 audit[4734]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4717 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233653139623131363731616566353239333065623361316136623837 Dec 12 19:32:47.800000 audit: BPF prog-id=243 op=LOAD Dec 12 19:32:47.800000 audit[4734]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4717 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.800000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233653139623131363731616566353239333065623361316136623837 Dec 12 19:32:47.800000 audit: BPF prog-id=243 op=UNLOAD Dec 12 19:32:47.800000 audit[4734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4717 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233653139623131363731616566353239333065623361316136623837 Dec 12 19:32:47.801000 audit: BPF prog-id=242 op=UNLOAD Dec 12 19:32:47.801000 audit[4734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4717 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233653139623131363731616566353239333065623361316136623837 Dec 12 19:32:47.801000 audit: BPF prog-id=244 op=LOAD Dec 12 19:32:47.801000 audit[4734]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4717 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233653139623131363731616566353239333065623361316136623837 Dec 12 19:32:47.825000 audit[4767]: NETFILTER_CFG table=filter:131 family=2 entries=49 op=nft_register_chain pid=4767 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 19:32:47.825000 audit[4767]: SYSCALL arch=c000003e syscall=46 success=yes exit=25436 a0=3 a1=7fff54b4f250 a2=0 a3=7fff54b4f23c items=0 ppid=4178 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.825000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 19:32:47.827728 systemd[1]: Started cri-containerd-173bd4b358b9c5dc21cca93aadb00558a9839bfea31995a82b7ee1c278cd6ae7.scope - libcontainer container 173bd4b358b9c5dc21cca93aadb00558a9839bfea31995a82b7ee1c278cd6ae7. 
Dec 12 19:32:47.836000 audit[4768]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=4768 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:47.836000 audit[4768]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffee78e1f30 a2=0 a3=7ffee78e1f1c items=0 ppid=3062 pid=4768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.836000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:47.839606 systemd-networkd[1573]: calie3f808cff64: Gained IPv6LL Dec 12 19:32:47.843000 audit[4768]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=4768 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:47.843000 audit[4768]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffee78e1f30 a2=0 a3=0 items=0 ppid=3062 pid=4768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.843000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:47.850000 audit: BPF prog-id=245 op=LOAD Dec 12 19:32:47.852000 audit: BPF prog-id=246 op=LOAD Dec 12 19:32:47.852000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4665 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137336264346233353862396335646332316363613933616164623030 Dec 12 19:32:47.853000 audit: BPF prog-id=246 op=UNLOAD Dec 12 19:32:47.853000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4665 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137336264346233353862396335646332316363613933616164623030 Dec 12 19:32:47.856000 audit: BPF prog-id=247 op=LOAD Dec 12 19:32:47.856000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4665 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137336264346233353862396335646332316363613933616164623030 Dec 12 
19:32:47.857000 audit: BPF prog-id=248 op=LOAD Dec 12 19:32:47.857000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4665 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137336264346233353862396335646332316363613933616164623030 Dec 12 19:32:47.857000 audit: BPF prog-id=248 op=UNLOAD Dec 12 19:32:47.857000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4665 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137336264346233353862396335646332316363613933616164623030 Dec 12 19:32:47.857000 audit: BPF prog-id=247 op=UNLOAD Dec 12 19:32:47.857000 audit[4755]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4665 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137336264346233353862396335646332316363613933616164623030 Dec 12 19:32:47.858000 audit: BPF prog-id=249 op=LOAD Dec 12 19:32:47.858000 audit[4755]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4665 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:47.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137336264346233353862396335646332316363613933616164623030 Dec 12 19:32:47.880131 containerd[1670]: time="2025-12-12T19:32:47.880078711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bf9fd49d-cblcg,Uid:bde65fa8-758f-4f39-b274-b1238cc0fdac,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b3e19b11671aef52930eb3a1a6b871435f20c4b2ecc5fa55fe6e78c80b7d9883\"" Dec 12 19:32:47.884983 containerd[1670]: time="2025-12-12T19:32:47.884916321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 19:32:47.899071 containerd[1670]: time="2025-12-12T19:32:47.899021901Z" level=info msg="StartContainer for \"173bd4b358b9c5dc21cca93aadb00558a9839bfea31995a82b7ee1c278cd6ae7\" returns successfully" Dec 12 19:32:48.159710 systemd-networkd[1573]: calia8dca136c68: Gained IPv6LL Dec 12 19:32:48.212324 containerd[1670]: 
time="2025-12-12T19:32:48.212259720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-26fhz,Uid:ae74493f-92fd-45a3-a4ca-78630ba178f3,Namespace:calico-system,Attempt:0,}" Dec 12 19:32:48.212969 containerd[1670]: time="2025-12-12T19:32:48.212884477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dm7rl,Uid:316e3085-f2ba-4438-937c-b4f18c2a87e3,Namespace:kube-system,Attempt:0,}" Dec 12 19:32:48.222692 containerd[1670]: time="2025-12-12T19:32:48.222613906Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:32:48.223500 containerd[1670]: time="2025-12-12T19:32:48.223189415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 19:32:48.223500 containerd[1670]: time="2025-12-12T19:32:48.223489129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 19:32:48.223959 kubelet[2938]: E1212 19:32:48.223828 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:32:48.225927 kubelet[2938]: E1212 19:32:48.223975 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:32:48.225927 kubelet[2938]: E1212 19:32:48.224364 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2sfrt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58bf9fd49d-cblcg_calico-apiserver(bde65fa8-758f-4f39-b274-b1238cc0fdac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 19:32:48.225927 kubelet[2938]: E1212 19:32:48.225657 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-cblcg" podUID="bde65fa8-758f-4f39-b274-b1238cc0fdac" Dec 12 19:32:48.399304 systemd-networkd[1573]: califd7cadba51c: Link UP Dec 12 19:32:48.401697 systemd-networkd[1573]: califd7cadba51c: Gained carrier Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.285 [INFO][4794] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dm7rl-eth0 coredns-674b8bbfcf- kube-system 316e3085-f2ba-4438-937c-b4f18c2a87e3 837 0 2025-12-12 19:32:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-i3fa2.gb1.brightbox.com coredns-674b8bbfcf-dm7rl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califd7cadba51c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dm7rl" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dm7rl-" Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.285 [INFO][4794] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dm7rl" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dm7rl-eth0" Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.334 [INFO][4820] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" HandleID="k8s-pod-network.71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" Workload="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dm7rl-eth0" Dec 12 
19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.334 [INFO][4820] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" HandleID="k8s-pod-network.71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" Workload="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dm7rl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf820), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-i3fa2.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-dm7rl", "timestamp":"2025-12-12 19:32:48.33471118 +0000 UTC"}, Hostname:"srv-i3fa2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.334 [INFO][4820] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.335 [INFO][4820] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.335 [INFO][4820] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i3fa2.gb1.brightbox.com' Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.349 [INFO][4820] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.356 [INFO][4820] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.364 [INFO][4820] ipam/ipam.go 511: Trying affinity for 192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.367 [INFO][4820] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.370 [INFO][4820] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.370 [INFO][4820] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.64/26 handle="k8s-pod-network.71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.372 [INFO][4820] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7 Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.377 [INFO][4820] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.64/26 handle="k8s-pod-network.71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.385 [INFO][4820] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.71/26] block=192.168.98.64/26 handle="k8s-pod-network.71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.385 [INFO][4820] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.71/26] 
handle="k8s-pod-network.71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.385 [INFO][4820] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 19:32:48.428764 containerd[1670]: 2025-12-12 19:32:48.385 [INFO][4820] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.71/26] IPv6=[] ContainerID="71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" HandleID="k8s-pod-network.71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" Workload="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dm7rl-eth0" Dec 12 19:32:48.432709 containerd[1670]: 2025-12-12 19:32:48.389 [INFO][4794] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dm7rl" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dm7rl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dm7rl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"316e3085-f2ba-4438-937c-b4f18c2a87e3", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i3fa2.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-dm7rl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd7cadba51c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:32:48.432709 containerd[1670]: 2025-12-12 19:32:48.389 [INFO][4794] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.71/32] ContainerID="71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dm7rl" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dm7rl-eth0" Dec 12 19:32:48.432709 containerd[1670]: 2025-12-12 19:32:48.389 [INFO][4794] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd7cadba51c ContainerID="71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dm7rl" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dm7rl-eth0" 
Dec 12 19:32:48.432709 containerd[1670]: 2025-12-12 19:32:48.404 [INFO][4794] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dm7rl" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dm7rl-eth0" Dec 12 19:32:48.432709 containerd[1670]: 2025-12-12 19:32:48.406 [INFO][4794] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dm7rl" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dm7rl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dm7rl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"316e3085-f2ba-4438-937c-b4f18c2a87e3", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i3fa2.gb1.brightbox.com", ContainerID:"71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7", Pod:"coredns-674b8bbfcf-dm7rl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd7cadba51c", MAC:"86:80:8c:00:16:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:32:48.432709 containerd[1670]: 2025-12-12 19:32:48.420 [INFO][4794] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dm7rl" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dm7rl-eth0" Dec 12 19:32:48.486105 containerd[1670]: time="2025-12-12T19:32:48.484977304Z" level=info msg="connecting to shim 71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7" address="unix:///run/containerd/s/32ac51b892ae65c52cf26ffffcbf2194af64687589dde0d502d943504ff0cc20" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:32:48.488000 audit[4855]: NETFILTER_CFG table=filter:134 family=2 entries=48 op=nft_register_chain pid=4855 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 19:32:48.488000 audit[4855]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=22704 a0=3 a1=7ffe435e1130 a2=0 a3=7ffe435e111c items=0 ppid=4178 pid=4855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.488000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 19:32:48.520211 systemd-networkd[1573]: calibe5091ae409: Link UP Dec 12 19:32:48.525580 systemd-networkd[1573]: calibe5091ae409: Gained carrier Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.290 [INFO][4795] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i3fa2.gb1.brightbox.com-k8s-goldmane--666569f655--26fhz-eth0 goldmane-666569f655- calico-system ae74493f-92fd-45a3-a4ca-78630ba178f3 836 0 2025-12-12 19:32:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-i3fa2.gb1.brightbox.com goldmane-666569f655-26fhz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibe5091ae409 [] [] }} ContainerID="f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" Namespace="calico-system" Pod="goldmane-666569f655-26fhz" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-goldmane--666569f655--26fhz-" Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.292 [INFO][4795] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" Namespace="calico-system" Pod="goldmane-666569f655-26fhz" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-goldmane--666569f655--26fhz-eth0" Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.343 [INFO][4825] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" HandleID="k8s-pod-network.f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" Workload="srv--i3fa2.gb1.brightbox.com-k8s-goldmane--666569f655--26fhz-eth0" Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.343 [INFO][4825] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" HandleID="k8s-pod-network.f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" Workload="srv--i3fa2.gb1.brightbox.com-k8s-goldmane--666569f655--26fhz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cfd40), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-i3fa2.gb1.brightbox.com", "pod":"goldmane-666569f655-26fhz", "timestamp":"2025-12-12 19:32:48.343406895 +0000 UTC"}, Hostname:"srv-i3fa2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.343 [INFO][4825] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.387 [INFO][4825] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.387 [INFO][4825] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i3fa2.gb1.brightbox.com' Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.450 [INFO][4825] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.458 [INFO][4825] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.472 [INFO][4825] ipam/ipam.go 511: Trying affinity for 192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.479 [INFO][4825] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.484 [INFO][4825] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.64/26 host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.484 [INFO][4825] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.64/26 handle="k8s-pod-network.f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.487 [INFO][4825] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902 Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.496 [INFO][4825] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.64/26 handle="k8s-pod-network.f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.509 [INFO][4825] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.72/26] block=192.168.98.64/26 handle="k8s-pod-network.f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.509 [INFO][4825] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.72/26] handle="k8s-pod-network.f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" host="srv-i3fa2.gb1.brightbox.com" Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.510 [INFO][4825] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
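In the IPAM exchange above, Calico confirms an affinity for block 192.168.98.64/26 on srv-i3fa2.gb1.brightbox.com and claims 192.168.98.71 (coredns-674b8bbfcf-dm7rl) and 192.168.98.72 (goldmane-666569f655-26fhz) from it. A small sketch with Python's ipaddress module, using only addresses taken from the log, to verify both assignments sit inside that /26 (which spans 192.168.98.64 through 192.168.98.127):

    import ipaddress

    # Affinity block and claimed addresses, copied from the IPAM entries above.
    block = ipaddress.ip_network("192.168.98.64/26")
    claimed = {
        "coredns-674b8bbfcf-dm7rl": "192.168.98.71",
        "goldmane-666569f655-26fhz": "192.168.98.72",
    }

    # A /26 holds 64 addresses, so both claims land inside the affine block.
    for pod, ip in claimed.items():
        print(pod, ip, "in", block, "->", ipaddress.ip_address(ip) in block)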
Dec 12 19:32:48.558938 containerd[1670]: 2025-12-12 19:32:48.510 [INFO][4825] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.72/26] IPv6=[] ContainerID="f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" HandleID="k8s-pod-network.f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" Workload="srv--i3fa2.gb1.brightbox.com-k8s-goldmane--666569f655--26fhz-eth0" Dec 12 19:32:48.559667 containerd[1670]: 2025-12-12 19:32:48.515 [INFO][4795] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" Namespace="calico-system" Pod="goldmane-666569f655-26fhz" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-goldmane--666569f655--26fhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i3fa2.gb1.brightbox.com-k8s-goldmane--666569f655--26fhz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ae74493f-92fd-45a3-a4ca-78630ba178f3", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i3fa2.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-26fhz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.98.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibe5091ae409", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:32:48.559667 containerd[1670]: 2025-12-12 19:32:48.516 [INFO][4795] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.72/32] ContainerID="f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" Namespace="calico-system" Pod="goldmane-666569f655-26fhz" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-goldmane--666569f655--26fhz-eth0" Dec 12 19:32:48.559667 containerd[1670]: 2025-12-12 19:32:48.516 [INFO][4795] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe5091ae409 ContainerID="f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" Namespace="calico-system" Pod="goldmane-666569f655-26fhz" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-goldmane--666569f655--26fhz-eth0" Dec 12 19:32:48.559667 containerd[1670]: 2025-12-12 19:32:48.526 [INFO][4795] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" Namespace="calico-system" Pod="goldmane-666569f655-26fhz" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-goldmane--666569f655--26fhz-eth0" Dec 12 19:32:48.559667 containerd[1670]: 2025-12-12 19:32:48.527 [INFO][4795] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" 
Namespace="calico-system" Pod="goldmane-666569f655-26fhz" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-goldmane--666569f655--26fhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i3fa2.gb1.brightbox.com-k8s-goldmane--666569f655--26fhz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ae74493f-92fd-45a3-a4ca-78630ba178f3", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 19, 32, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i3fa2.gb1.brightbox.com", ContainerID:"f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902", Pod:"goldmane-666569f655-26fhz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.98.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibe5091ae409", MAC:"92:27:2e:0b:e8:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 19:32:48.559667 containerd[1670]: 2025-12-12 19:32:48.544 [INFO][4795] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" Namespace="calico-system" Pod="goldmane-666569f655-26fhz" WorkloadEndpoint="srv--i3fa2.gb1.brightbox.com-k8s-goldmane--666569f655--26fhz-eth0" Dec 12 19:32:48.563443 systemd[1]: Started cri-containerd-71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7.scope - libcontainer container 71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7. 
Dec 12 19:32:48.590000 audit: BPF prog-id=250 op=LOAD Dec 12 19:32:48.591000 audit: BPF prog-id=251 op=LOAD Dec 12 19:32:48.591000 audit[4867]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4854 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.591000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731303537636230366139643635336634353636633663336230363862 Dec 12 19:32:48.591000 audit: BPF prog-id=251 op=UNLOAD Dec 12 19:32:48.591000 audit[4867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4854 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.591000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731303537636230366139643635336634353636633663336230363862 Dec 12 19:32:48.592000 audit: BPF prog-id=252 op=LOAD Dec 12 19:32:48.592000 audit[4867]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4854 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731303537636230366139643635336634353636633663336230363862 Dec 12 19:32:48.592000 audit: BPF prog-id=253 op=LOAD Dec 12 19:32:48.592000 audit[4867]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4854 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731303537636230366139643635336634353636633663336230363862 Dec 12 19:32:48.593000 audit: BPF prog-id=253 op=UNLOAD Dec 12 19:32:48.593000 audit[4867]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4854 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731303537636230366139643635336634353636633663336230363862 Dec 12 19:32:48.593000 audit: BPF prog-id=252 op=UNLOAD Dec 12 19:32:48.593000 audit[4867]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4854 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731303537636230366139643635336634353636633663336230363862 Dec 12 19:32:48.593000 audit: BPF prog-id=254 op=LOAD Dec 12 19:32:48.593000 audit[4867]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4854 pid=4867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731303537636230366139643635336634353636633663336230363862 Dec 12 19:32:48.601289 containerd[1670]: time="2025-12-12T19:32:48.601248667Z" level=info msg="connecting to shim f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902" address="unix:///run/containerd/s/324584f539098cc84beb94a7361587e2c981bbb8b55b308a81fa483d0724866d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 19:32:48.634000 audit[4917]: NETFILTER_CFG table=filter:135 family=2 entries=70 op=nft_register_chain pid=4917 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 19:32:48.634000 audit[4917]: SYSCALL arch=c000003e syscall=46 success=yes exit=33956 a0=3 a1=7ffe93e76410 a2=0 a3=7ffe93e763fc items=0 ppid=4178 pid=4917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.634000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 19:32:48.666810 systemd[1]: Started cri-containerd-f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902.scope - libcontainer container f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902. 
Dec 12 19:32:48.671609 systemd-networkd[1573]: cali2c3e4521325: Gained IPv6LL Dec 12 19:32:48.681783 containerd[1670]: time="2025-12-12T19:32:48.681097475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dm7rl,Uid:316e3085-f2ba-4438-937c-b4f18c2a87e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7\"" Dec 12 19:32:48.693902 containerd[1670]: time="2025-12-12T19:32:48.693804750Z" level=info msg="CreateContainer within sandbox \"71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 19:32:48.693000 audit: BPF prog-id=255 op=LOAD Dec 12 19:32:48.694000 audit: BPF prog-id=256 op=LOAD Dec 12 19:32:48.694000 audit[4916]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4905 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634313163626362663166313230393266343130656561376461633564 Dec 12 19:32:48.695000 audit: BPF prog-id=256 op=UNLOAD Dec 12 19:32:48.695000 audit[4916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4905 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634313163626362663166313230393266343130656561376461633564 Dec 12 19:32:48.695000 audit: BPF prog-id=257 op=LOAD Dec 12 19:32:48.695000 audit[4916]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4905 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634313163626362663166313230393266343130656561376461633564 Dec 12 19:32:48.695000 audit: BPF prog-id=258 op=LOAD Dec 12 19:32:48.695000 audit[4916]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4905 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634313163626362663166313230393266343130656561376461633564 Dec 12 19:32:48.695000 audit: BPF prog-id=258 op=UNLOAD Dec 12 19:32:48.695000 audit[4916]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4905 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634313163626362663166313230393266343130656561376461633564 Dec 12 19:32:48.695000 audit: BPF prog-id=257 op=UNLOAD Dec 12 19:32:48.695000 audit[4916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4905 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634313163626362663166313230393266343130656561376461633564 Dec 12 19:32:48.696000 audit: BPF prog-id=259 op=LOAD Dec 12 19:32:48.696000 audit[4916]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4905 pid=4916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634313163626362663166313230393266343130656561376461633564 Dec 12 19:32:48.703287 kubelet[2938]: E1212 19:32:48.703255 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-cblcg" podUID="bde65fa8-758f-4f39-b274-b1238cc0fdac" Dec 12 19:32:48.710015 kubelet[2938]: E1212 19:32:48.709977 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-jx9n2" podUID="5344f808-2f57-4103-9d43-e41974952208" Dec 12 19:32:48.711061 kubelet[2938]: E1212 19:32:48.711021 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dd47b96bf-8h6g2" podUID="efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac" Dec 12 19:32:48.717043 containerd[1670]: time="2025-12-12T19:32:48.717008692Z" level=info msg="Container e93abd8f394bc0f85e0ba939da4ee7da9de4be13a3889b91d1b7160c6290286b: CDI devices from CRI Config.CDIDevices: []" Dec 12 19:32:48.726706 containerd[1670]: time="2025-12-12T19:32:48.726666943Z" level=info msg="CreateContainer within sandbox \"71057cb06a9d653f4566c6c3b068b353bc268a801ad74cca9b9616e9bcc883c7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e93abd8f394bc0f85e0ba939da4ee7da9de4be13a3889b91d1b7160c6290286b\"" Dec 12 19:32:48.727338 containerd[1670]: time="2025-12-12T19:32:48.727315959Z" level=info msg="StartContainer for \"e93abd8f394bc0f85e0ba939da4ee7da9de4be13a3889b91d1b7160c6290286b\"" Dec 12 19:32:48.728203 containerd[1670]: time="2025-12-12T19:32:48.728176104Z" level=info msg="connecting to shim e93abd8f394bc0f85e0ba939da4ee7da9de4be13a3889b91d1b7160c6290286b" address="unix:///run/containerd/s/32ac51b892ae65c52cf26ffffcbf2194af64687589dde0d502d943504ff0cc20" protocol=ttrpc version=3 Dec 12 19:32:48.765958 systemd[1]: Started cri-containerd-e93abd8f394bc0f85e0ba939da4ee7da9de4be13a3889b91d1b7160c6290286b.scope - libcontainer container e93abd8f394bc0f85e0ba939da4ee7da9de4be13a3889b91d1b7160c6290286b. Dec 12 19:32:48.811722 kubelet[2938]: I1212 19:32:48.804146 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2v62p" podStartSLOduration=48.804118399000004 podStartE2EDuration="48.804118399s" podCreationTimestamp="2025-12-12 19:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 19:32:48.761063617 +0000 UTC m=+53.779215539" watchObservedRunningTime="2025-12-12 19:32:48.804118399 +0000 UTC m=+53.822270389" Dec 12 19:32:48.817000 audit: BPF prog-id=260 op=LOAD Dec 12 19:32:48.821000 audit: BPF prog-id=261 op=LOAD Dec 12 19:32:48.821000 audit[4943]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4854 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539336162643866333934626330663835653062613933396461346565 Dec 12 19:32:48.821000 audit: BPF prog-id=261 op=UNLOAD Dec 12 19:32:48.821000 audit[4943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4854 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539336162643866333934626330663835653062613933396461346565 Dec 12 19:32:48.821000 audit: BPF prog-id=262 op=LOAD Dec 12 19:32:48.821000 audit[4943]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 
a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4854 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539336162643866333934626330663835653062613933396461346565 Dec 12 19:32:48.822000 audit: BPF prog-id=263 op=LOAD Dec 12 19:32:48.822000 audit[4943]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4854 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539336162643866333934626330663835653062613933396461346565 Dec 12 19:32:48.823000 audit: BPF prog-id=263 op=UNLOAD Dec 12 19:32:48.824000 audit[4969]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=4969 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:48.824000 audit[4969]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff83babc00 a2=0 a3=7fff83babbec items=0 ppid=3062 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.824000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:48.823000 audit[4943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4854 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539336162643866333934626330663835653062613933396461346565 Dec 12 19:32:48.825000 audit: BPF prog-id=262 op=UNLOAD Dec 12 19:32:48.825000 audit[4943]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4854 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539336162643866333934626330663835653062613933396461346565 Dec 12 19:32:48.825000 audit: BPF prog-id=264 op=LOAD Dec 12 19:32:48.825000 audit[4943]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4854 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539336162643866333934626330663835653062613933396461346565 Dec 12 19:32:48.828032 containerd[1670]: time="2025-12-12T19:32:48.825704600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-26fhz,Uid:ae74493f-92fd-45a3-a4ca-78630ba178f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"f411cbcbf1f12092f410eea7dac5d4b56772f0e4e78b6423fef317433e527902\"" Dec 12 19:32:48.828000 audit[4969]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=4969 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:48.828000 audit[4969]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff83babc00 a2=0 a3=0 items=0 ppid=3062 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.828000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:48.831807 containerd[1670]: time="2025-12-12T19:32:48.831777194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 19:32:48.872354 containerd[1670]: time="2025-12-12T19:32:48.872314509Z" level=info msg="StartContainer for \"e93abd8f394bc0f85e0ba939da4ee7da9de4be13a3889b91d1b7160c6290286b\" returns successfully" Dec 12 19:32:48.883000 audit[4977]: NETFILTER_CFG table=filter:138 family=2 entries=20 op=nft_register_rule pid=4977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:48.883000 audit[4977]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffce03fd510 a2=0 a3=7ffce03fd4fc items=0 ppid=3062 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.883000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:48.887000 audit[4977]: NETFILTER_CFG table=nat:139 family=2 entries=14 op=nft_register_rule pid=4977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:48.887000 audit[4977]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffce03fd510 a2=0 a3=0 items=0 ppid=3062 pid=4977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:48.887000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:49.154025 containerd[1670]: time="2025-12-12T19:32:49.153780470Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:32:49.155517 containerd[1670]: time="2025-12-12T19:32:49.155465569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 19:32:49.156143 containerd[1670]: time="2025-12-12T19:32:49.155599161Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 19:32:49.157198 kubelet[2938]: E1212 19:32:49.156539 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 19:32:49.157198 kubelet[2938]: E1212 19:32:49.156606 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 19:32:49.157198 kubelet[2938]: E1212 19:32:49.156846 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzgz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-26fhz_calico-system(ae74493f-92fd-45a3-a4ca-78630ba178f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 19:32:49.159465 kubelet[2938]: E1212 19:32:49.158877 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26fhz" podUID="ae74493f-92fd-45a3-a4ca-78630ba178f3" Dec 12 19:32:49.311819 systemd-networkd[1573]: cali6c25b6af340: Gained IPv6LL Dec 12 19:32:49.567813 systemd-networkd[1573]: calibe5091ae409: Gained IPv6LL Dec 12 19:32:49.735668 kubelet[2938]: E1212 19:32:49.735024 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26fhz" podUID="ae74493f-92fd-45a3-a4ca-78630ba178f3" Dec 12 19:32:49.739671 kubelet[2938]: E1212 19:32:49.739625 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-cblcg" podUID="bde65fa8-758f-4f39-b274-b1238cc0fdac" Dec 12 19:32:49.754487 kubelet[2938]: I1212 19:32:49.752670 2938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dm7rl" podStartSLOduration=49.752635974 podStartE2EDuration="49.752635974s" podCreationTimestamp="2025-12-12 19:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 19:32:49.750660266 +0000 UTC m=+54.768812205" watchObservedRunningTime="2025-12-12 19:32:49.752635974 +0000 UTC m=+54.770787950" Dec 12 19:32:49.845000 audit[4992]: NETFILTER_CFG 
table=filter:140 family=2 entries=20 op=nft_register_rule pid=4992 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:49.845000 audit[4992]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffff5090560 a2=0 a3=7ffff509054c items=0 ppid=3062 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:49.845000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:49.853000 audit[4992]: NETFILTER_CFG table=nat:141 family=2 entries=14 op=nft_register_rule pid=4992 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:49.853000 audit[4992]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffff5090560 a2=0 a3=0 items=0 ppid=3062 pid=4992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:49.853000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:50.399794 systemd-networkd[1573]: califd7cadba51c: Gained IPv6LL Dec 12 19:32:50.737930 kubelet[2938]: E1212 19:32:50.737769 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26fhz" podUID="ae74493f-92fd-45a3-a4ca-78630ba178f3" Dec 12 19:32:50.881000 audit[4995]: NETFILTER_CFG table=filter:142 family=2 entries=17 op=nft_register_rule pid=4995 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:50.881000 audit[4995]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffc2431130 a2=0 a3=7fffc243111c items=0 ppid=3062 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:50.881000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:50.893000 audit[4995]: NETFILTER_CFG table=nat:143 family=2 entries=47 op=nft_register_chain pid=4995 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:32:50.893000 audit[4995]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fffc2431130 a2=0 a3=7fffc243111c items=0 ppid=3062 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:32:50.893000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:32:57.213801 containerd[1670]: time="2025-12-12T19:32:57.213462137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 19:32:57.531532 containerd[1670]: 
time="2025-12-12T19:32:57.531362257Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:32:57.532546 containerd[1670]: time="2025-12-12T19:32:57.532485047Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 19:32:57.533151 containerd[1670]: time="2025-12-12T19:32:57.532523432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 19:32:57.533232 kubelet[2938]: E1212 19:32:57.532808 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 19:32:57.533232 kubelet[2938]: E1212 19:32:57.532864 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 19:32:57.533232 kubelet[2938]: E1212 19:32:57.533041 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pl7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-klld4_calico-system(3228a46d-97a1-46c2-a390-a03b4bb70892): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 12 19:32:57.536215 containerd[1670]: time="2025-12-12T19:32:57.536177289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 19:32:57.836948 containerd[1670]: time="2025-12-12T19:32:57.836826309Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:32:57.838639 containerd[1670]: time="2025-12-12T19:32:57.838473881Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 19:32:57.838943 containerd[1670]: time="2025-12-12T19:32:57.838502325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 19:32:57.839421 kubelet[2938]: E1212 19:32:57.839222 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 19:32:57.839608 kubelet[2938]: E1212 19:32:57.839516 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 19:32:57.840468 kubelet[2938]: E1212 19:32:57.840019 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pl7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-klld4_calico-system(3228a46d-97a1-46c2-a390-a03b4bb70892): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 19:32:57.841708 kubelet[2938]: E1212 19:32:57.841604 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:32:58.213364 containerd[1670]: time="2025-12-12T19:32:58.213004105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 19:32:58.515756 containerd[1670]: time="2025-12-12T19:32:58.514804659Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:32:58.516999 containerd[1670]: time="2025-12-12T19:32:58.516716017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 19:32:58.516999 containerd[1670]: time="2025-12-12T19:32:58.516677167Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 19:32:58.517714 kubelet[2938]: E1212 19:32:58.517077 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 19:32:58.517714 kubelet[2938]: E1212 19:32:58.517146 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 19:32:58.517714 kubelet[2938]: E1212 19:32:58.517346 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:883a52629ac84d8fb872d294f44af2b8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7ljs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6595c59d9d-t84l2_calico-system(b2777763-2a14-4bdc-bc89-41885f08913f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 19:32:58.520841 containerd[1670]: time="2025-12-12T19:32:58.520671301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 19:32:58.832800 containerd[1670]: time="2025-12-12T19:32:58.832679908Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:32:58.835204 containerd[1670]: time="2025-12-12T19:32:58.835097405Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 19:32:58.835749 containerd[1670]: time="2025-12-12T19:32:58.835132985Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 19:32:58.836206 kubelet[2938]: E1212 19:32:58.836114 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 19:32:58.837168 kubelet[2938]: E1212 19:32:58.836828 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 19:32:58.837630 kubelet[2938]: E1212 19:32:58.837399 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7ljs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6595c59d9d-t84l2_calico-system(b2777763-2a14-4bdc-bc89-41885f08913f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 19:32:58.839422 kubelet[2938]: E1212 19:32:58.839354 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6595c59d9d-t84l2" podUID="b2777763-2a14-4bdc-bc89-41885f08913f" Dec 12 19:32:59.214002 containerd[1670]: time="2025-12-12T19:32:59.213548397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 19:32:59.525980 containerd[1670]: time="2025-12-12T19:32:59.525619914Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:32:59.526932 containerd[1670]: time="2025-12-12T19:32:59.526871385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 19:32:59.527042 containerd[1670]: time="2025-12-12T19:32:59.526984395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 19:32:59.527403 kubelet[2938]: E1212 19:32:59.527328 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 19:32:59.527512 kubelet[2938]: E1212 19:32:59.527429 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 19:32:59.528700 kubelet[2938]: E1212 19:32:59.528595 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gsqn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5dd47b96bf-8h6g2_calico-system(efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 19:32:59.530054 kubelet[2938]: E1212 19:32:59.529995 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dd47b96bf-8h6g2" podUID="efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac" Dec 12 19:33:02.213356 containerd[1670]: time="2025-12-12T19:33:02.213307069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 19:33:02.516186 containerd[1670]: time="2025-12-12T19:33:02.515648642Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:33:02.521517 containerd[1670]: time="2025-12-12T19:33:02.521425159Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 19:33:02.522507 containerd[1670]: time="2025-12-12T19:33:02.521594789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 19:33:02.522597 kubelet[2938]: E1212 19:33:02.521818 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:33:02.522597 kubelet[2938]: E1212 19:33:02.521884 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:33:02.522597 
kubelet[2938]: E1212 19:33:02.522081 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2sfrt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58bf9fd49d-cblcg_calico-apiserver(bde65fa8-758f-4f39-b274-b1238cc0fdac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 19:33:02.524393 kubelet[2938]: E1212 19:33:02.523303 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-cblcg" podUID="bde65fa8-758f-4f39-b274-b1238cc0fdac" Dec 12 19:33:03.212768 containerd[1670]: time="2025-12-12T19:33:03.212579021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 19:33:03.527932 containerd[1670]: time="2025-12-12T19:33:03.527751193Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:33:03.528650 containerd[1670]: time="2025-12-12T19:33:03.528553011Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 19:33:03.528761 containerd[1670]: time="2025-12-12T19:33:03.528682141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 19:33:03.529233 kubelet[2938]: E1212 19:33:03.528948 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:33:03.529233 kubelet[2938]: E1212 19:33:03.529005 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:33:03.529233 kubelet[2938]: E1212 19:33:03.529175 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pj2pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58bf9fd49d-jx9n2_calico-apiserver(5344f808-2f57-4103-9d43-e41974952208): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 19:33:03.530565 kubelet[2938]: E1212 19:33:03.530522 2938 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-jx9n2" podUID="5344f808-2f57-4103-9d43-e41974952208" Dec 12 19:33:04.214099 containerd[1670]: time="2025-12-12T19:33:04.213787437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 19:33:04.534991 containerd[1670]: time="2025-12-12T19:33:04.534859420Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:33:04.536222 containerd[1670]: time="2025-12-12T19:33:04.536065162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 19:33:04.536222 containerd[1670]: time="2025-12-12T19:33:04.536181798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 19:33:04.536519 kubelet[2938]: E1212 19:33:04.536425 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 19:33:04.536862 kubelet[2938]: E1212 19:33:04.536543 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 19:33:04.537682 kubelet[2938]: E1212 19:33:04.537591 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzgz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-26fhz_calico-system(ae74493f-92fd-45a3-a4ca-78630ba178f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 19:33:04.538826 kubelet[2938]: E1212 19:33:04.538773 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26fhz" podUID="ae74493f-92fd-45a3-a4ca-78630ba178f3" Dec 12 19:33:11.221834 kubelet[2938]: E1212 19:33:11.221658 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:33:14.214404 kubelet[2938]: E1212 19:33:14.214334 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6595c59d9d-t84l2" podUID="b2777763-2a14-4bdc-bc89-41885f08913f" Dec 12 19:33:15.214329 kubelet[2938]: E1212 19:33:15.214176 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dd47b96bf-8h6g2" podUID="efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac" Dec 12 19:33:15.216535 kubelet[2938]: E1212 19:33:15.216351 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-cblcg" podUID="bde65fa8-758f-4f39-b274-b1238cc0fdac" Dec 12 19:33:15.218370 kubelet[2938]: E1212 19:33:15.218334 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26fhz" podUID="ae74493f-92fd-45a3-a4ca-78630ba178f3" Dec 12 19:33:15.322831 update_engine[1643]: 
I20251212 19:33:15.322712 1643 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 12 19:33:15.322831 update_engine[1643]: I20251212 19:33:15.322819 1643 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 12 19:33:15.326653 update_engine[1643]: I20251212 19:33:15.326386 1643 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 12 19:33:15.328523 update_engine[1643]: I20251212 19:33:15.328414 1643 omaha_request_params.cc:62] Current group set to beta Dec 12 19:33:15.328677 update_engine[1643]: I20251212 19:33:15.328653 1643 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 12 19:33:15.328677 update_engine[1643]: I20251212 19:33:15.328668 1643 update_attempter.cc:643] Scheduling an action processor start. Dec 12 19:33:15.328767 update_engine[1643]: I20251212 19:33:15.328694 1643 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 12 19:33:15.328767 update_engine[1643]: I20251212 19:33:15.328763 1643 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 12 19:33:15.328864 update_engine[1643]: I20251212 19:33:15.328845 1643 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 12 19:33:15.328864 update_engine[1643]: I20251212 19:33:15.328858 1643 omaha_request_action.cc:272] Request: Dec 12 19:33:15.328864 update_engine[1643]: Dec 12 19:33:15.328864 update_engine[1643]: Dec 12 19:33:15.328864 update_engine[1643]: Dec 12 19:33:15.328864 update_engine[1643]: Dec 12 19:33:15.328864 update_engine[1643]: Dec 12 19:33:15.328864 update_engine[1643]: Dec 12 19:33:15.328864 update_engine[1643]: Dec 12 19:33:15.328864 update_engine[1643]: Dec 12 19:33:15.329269 update_engine[1643]: I20251212 19:33:15.328866 1643 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 19:33:15.363966 update_engine[1643]: I20251212 19:33:15.363660 1643 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 19:33:15.369552 update_engine[1643]: I20251212 19:33:15.368779 1643 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
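Note on the update_engine entries above: the Omaha request is posted to the literal host name "disabled", so the libcurl transfer that follows can only fail at name resolution, which matches the "Could not resolve host: disabled" error logged below. As a minimal sketch (not part of the log, and assuming a host where "disabled" is not a resolvable name), the same failure mode can be reproduced directly:

    import socket

    # Resolving the literal host name "disabled" fails the same way libcurl does
    # in the update_engine entries below ("Could not resolve host: disabled").
    try:
        socket.getaddrinfo("disabled", 443)
    except socket.gaierror as err:
        print(f"resolution failed: {err}")
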
Dec 12 19:33:15.388356 locksmithd[1693]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 12 19:33:15.400518 update_engine[1643]: E20251212 19:33:15.399850 1643 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (DNS server returned answer with no data) Dec 12 19:33:15.400518 update_engine[1643]: I20251212 19:33:15.400110 1643 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 12 19:33:16.214065 kubelet[2938]: E1212 19:33:16.213959 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-jx9n2" podUID="5344f808-2f57-4103-9d43-e41974952208" Dec 12 19:33:24.220402 containerd[1670]: time="2025-12-12T19:33:24.220308053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 19:33:24.537296 containerd[1670]: time="2025-12-12T19:33:24.536916386Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:33:24.539148 containerd[1670]: time="2025-12-12T19:33:24.539079197Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 19:33:24.539288 containerd[1670]: time="2025-12-12T19:33:24.539236874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 19:33:24.539773 kubelet[2938]: E1212 19:33:24.539709 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 19:33:24.540212 kubelet[2938]: E1212 19:33:24.539811 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 19:33:24.540212 kubelet[2938]: E1212 19:33:24.540020 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pl7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-klld4_calico-system(3228a46d-97a1-46c2-a390-a03b4bb70892): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 19:33:24.542631 containerd[1670]: time="2025-12-12T19:33:24.542590326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 19:33:24.856235 containerd[1670]: time="2025-12-12T19:33:24.856140065Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:33:24.857587 containerd[1670]: time="2025-12-12T19:33:24.857410073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 19:33:24.857587 containerd[1670]: time="2025-12-12T19:33:24.857464576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 19:33:24.857963 kubelet[2938]: E1212 19:33:24.857906 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 19:33:24.858053 kubelet[2938]: E1212 19:33:24.857973 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 19:33:24.858258 kubelet[2938]: E1212 19:33:24.858194 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pl7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-klld4_calico-system(3228a46d-97a1-46c2-a390-a03b4bb70892): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 19:33:24.859821 kubelet[2938]: E1212 19:33:24.859670 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:33:24.945317 kernel: kauditd_printk_skb: 214 callbacks suppressed Dec 12 19:33:24.945832 kernel: audit: type=1130 audit(1765568004.927:758): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.244.101.34:22-139.178.89.65:44010 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:24.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.244.101.34:22-139.178.89.65:44010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:24.928852 systemd[1]: Started sshd@7-10.244.101.34:22-139.178.89.65:44010.service - OpenSSH per-connection server daemon (139.178.89.65:44010). Dec 12 19:33:25.219693 containerd[1670]: time="2025-12-12T19:33:25.219002170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 19:33:25.229171 update_engine[1643]: I20251212 19:33:25.228524 1643 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 19:33:25.229171 update_engine[1643]: I20251212 19:33:25.228655 1643 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 19:33:25.229171 update_engine[1643]: I20251212 19:33:25.229105 1643 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 12 19:33:25.233225 update_engine[1643]: E20251212 19:33:25.232774 1643 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (DNS server returned answer with no data) Dec 12 19:33:25.233225 update_engine[1643]: I20251212 19:33:25.232903 1643 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 12 19:33:25.534649 containerd[1670]: time="2025-12-12T19:33:25.533870531Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:33:25.537843 containerd[1670]: time="2025-12-12T19:33:25.537763951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 19:33:25.539049 containerd[1670]: time="2025-12-12T19:33:25.538333562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 19:33:25.539415 kubelet[2938]: E1212 19:33:25.538791 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 19:33:25.539415 kubelet[2938]: E1212 19:33:25.538870 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 19:33:25.539415 kubelet[2938]: E1212 19:33:25.539686 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:883a52629ac84d8fb872d294f44af2b8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7ljs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6595c59d9d-t84l2_calico-system(b2777763-2a14-4bdc-bc89-41885f08913f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 19:33:25.543995 containerd[1670]: time="2025-12-12T19:33:25.543194612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 19:33:25.846158 containerd[1670]: time="2025-12-12T19:33:25.845645196Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:33:25.847603 containerd[1670]: time="2025-12-12T19:33:25.847390293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 19:33:25.848507 containerd[1670]: time="2025-12-12T19:33:25.847866185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 19:33:25.848627 kubelet[2938]: E1212 19:33:25.848109 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 19:33:25.848627 kubelet[2938]: E1212 19:33:25.848190 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 19:33:25.849752 kubelet[2938]: E1212 19:33:25.849517 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7ljs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6595c59d9d-t84l2_calico-system(b2777763-2a14-4bdc-bc89-41885f08913f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 19:33:25.850943 kubelet[2938]: E1212 19:33:25.850697 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6595c59d9d-t84l2" podUID="b2777763-2a14-4bdc-bc89-41885f08913f" Dec 12 19:33:25.866000 audit[5060]: USER_ACCT pid=5060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:25.887229 kernel: audit: type=1101 audit(1765568005.866:759): pid=5060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh 
res=success' Dec 12 19:33:25.901023 kernel: audit: type=1103 audit(1765568005.895:760): pid=5060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:25.895000 audit[5060]: CRED_ACQ pid=5060 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:25.902042 sshd[5060]: Accepted publickey for core from 139.178.89.65 port 44010 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:33:25.907829 kernel: audit: type=1006 audit(1765568005.895:761): pid=5060 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 12 19:33:25.907910 kernel: audit: type=1300 audit(1765568005.895:761): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc752244a0 a2=3 a3=0 items=0 ppid=1 pid=5060 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:33:25.895000 audit[5060]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc752244a0 a2=3 a3=0 items=0 ppid=1 pid=5060 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:33:25.910271 kernel: audit: type=1327 audit(1765568005.895:761): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:33:25.895000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:33:25.909209 sshd-session[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:33:25.926892 systemd-logind[1642]: New session 10 of user core. Dec 12 19:33:25.934476 systemd[1]: Started session-10.scope - Session 10 of User core. 
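The PROCTITLE fields in the audit records above (for example proctitle=737368642D73657373696F6E3A20636F7265205B707269765D) are the process command line, hex-encoded because the audit framework hex-encodes string fields it cannot quote safely, such as ones containing spaces. A quick decode of the exact value logged here, as a minimal Python sketch:

```python
# The audit PROCTITLE value is the process command line, hex-encoded because
# the audit framework hex-encodes string fields it cannot safely quote
# (argv may also contain NUL separators). Decoding the exact value above:
raw = bytes.fromhex("737368642D73657373696F6E3A20636F7265205B707269765D")
print(raw.replace(b"\x00", b" ").decode())   # -> sshd-session: core [priv]
```

which matches the sshd-session processes handling these logins.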
Dec 12 19:33:25.939000 audit[5060]: USER_START pid=5060 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:25.946809 kernel: audit: type=1105 audit(1765568005.939:762): pid=5060 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:25.944000 audit[5064]: CRED_ACQ pid=5064 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:25.954476 kernel: audit: type=1103 audit(1765568005.944:763): pid=5064 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:26.978243 sshd[5064]: Connection closed by 139.178.89.65 port 44010 Dec 12 19:33:26.978948 sshd-session[5060]: pam_unix(sshd:session): session closed for user core Dec 12 19:33:27.006089 kernel: audit: type=1106 audit(1765568006.992:764): pid=5060 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:26.992000 audit[5060]: USER_END pid=5060 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:27.033532 kernel: audit: type=1104 audit(1765568007.015:765): pid=5060 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:27.015000 audit[5060]: CRED_DISP pid=5060 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:27.041310 systemd[1]: sshd@7-10.244.101.34:22-139.178.89.65:44010.service: Deactivated successfully. Dec 12 19:33:27.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.244.101.34:22-139.178.89.65:44010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:27.052151 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 19:33:27.057134 systemd-logind[1642]: Session 10 logged out. Waiting for processes to exit. Dec 12 19:33:27.058867 systemd-logind[1642]: Removed session 10. 
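Every containerd pull above ends with "fetch failed after status: 404 Not Found" from ghcr.io, i.e. the registry does not know the ghcr.io/flatcar/calico/*:v3.30.4 tags being requested. One way to confirm that independently of kubelet is to ask the registry's manifest endpoint directly. The sketch below is illustrative only: it assumes the image is publicly pullable and that ghcr.io issues anonymous bearer tokens from its /token endpoint; the repository and tag defaults are simply taken from the log lines above.

```python
#!/usr/bin/env python3
"""Check whether an image tag exists in an OCI/Docker registry (here: ghcr.io).

Minimal sketch: assumes the image is publicly pullable and that the registry
hands out anonymous bearer tokens from https://ghcr.io/token. A 404 on the
manifest request corresponds to containerd's "failed to resolve image ...
not found" errors in the log above.
"""
import json
import sys
import urllib.error
import urllib.parse
import urllib.request

REGISTRY = "ghcr.io"

def tag_exists(repository: str, tag: str) -> bool:
    # 1. Fetch an anonymous pull token for the repository (assumption: GHCR
    #    exposes the usual token endpoint at /token).
    query = urllib.parse.urlencode({"scope": f"repository:{repository}:pull"})
    with urllib.request.urlopen(f"https://{REGISTRY}/token?{query}") as resp:
        token = json.load(resp)["token"]

    # 2. HEAD the manifest; 200 means the tag resolves, 404 means it does not.
    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{repository}/manifests/{tag}", method="HEAD")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept",
                   "application/vnd.oci.image.index.v1+json, "
                   "application/vnd.docker.distribution.manifest.list.v2+json, "
                   "application/vnd.docker.distribution.manifest.v2+json")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    repo = sys.argv[1] if len(sys.argv) > 1 else "flatcar/calico/whisker"
    tag = sys.argv[2] if len(sys.argv) > 2 else "v3.30.4"
    print(f"{REGISTRY}/{repo}:{tag} exists:", tag_exists(repo, tag))
```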
Dec 12 19:33:28.214991 containerd[1670]: time="2025-12-12T19:33:28.214842612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 19:33:28.533394 containerd[1670]: time="2025-12-12T19:33:28.533217753Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:33:28.534394 containerd[1670]: time="2025-12-12T19:33:28.534345538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 19:33:28.534659 containerd[1670]: time="2025-12-12T19:33:28.534372307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 19:33:28.534816 kubelet[2938]: E1212 19:33:28.534750 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 19:33:28.535857 kubelet[2938]: E1212 19:33:28.534848 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 19:33:28.535857 kubelet[2938]: E1212 19:33:28.535180 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gsqn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5dd47b96bf-8h6g2_calico-system(efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 19:33:28.536081 containerd[1670]: time="2025-12-12T19:33:28.535674445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 19:33:28.536420 kubelet[2938]: E1212 19:33:28.536347 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dd47b96bf-8h6g2" podUID="efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac" Dec 12 19:33:28.853108 containerd[1670]: time="2025-12-12T19:33:28.852667010Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:33:28.854387 containerd[1670]: time="2025-12-12T19:33:28.854022069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 19:33:28.855205 containerd[1670]: time="2025-12-12T19:33:28.854071110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 19:33:28.855277 kubelet[2938]: E1212 19:33:28.854793 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:33:28.855277 kubelet[2938]: E1212 19:33:28.854861 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:33:28.855277 kubelet[2938]: E1212 19:33:28.855068 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pj2pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58bf9fd49d-jx9n2_calico-apiserver(5344f808-2f57-4103-9d43-e41974952208): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 19:33:28.856651 kubelet[2938]: E1212 19:33:28.856586 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-jx9n2" podUID="5344f808-2f57-4103-9d43-e41974952208" Dec 12 19:33:30.214462 containerd[1670]: time="2025-12-12T19:33:30.213914976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 19:33:30.531410 containerd[1670]: time="2025-12-12T19:33:30.531241959Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:33:30.532581 containerd[1670]: time="2025-12-12T19:33:30.532481684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 19:33:30.532581 containerd[1670]: time="2025-12-12T19:33:30.532535819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 19:33:30.532867 
kubelet[2938]: E1212 19:33:30.532821 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:33:30.533271 kubelet[2938]: E1212 19:33:30.532909 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:33:30.533883 kubelet[2938]: E1212 19:33:30.533559 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2sfrt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58bf9fd49d-cblcg_calico-apiserver(bde65fa8-758f-4f39-b274-b1238cc0fdac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 19:33:30.535602 kubelet[2938]: E1212 19:33:30.535271 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-cblcg" podUID="bde65fa8-758f-4f39-b274-b1238cc0fdac" Dec 12 19:33:30.557872 containerd[1670]: time="2025-12-12T19:33:30.557763340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 19:33:30.869331 containerd[1670]: time="2025-12-12T19:33:30.869205196Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:33:30.870691 containerd[1670]: time="2025-12-12T19:33:30.870633759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 19:33:30.871927 kubelet[2938]: E1212 19:33:30.871554 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 19:33:30.872299 kubelet[2938]: E1212 19:33:30.872252 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 19:33:30.874263 kubelet[2938]: E1212 19:33:30.874166 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzgz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-26fhz_calico-system(ae74493f-92fd-45a3-a4ca-78630ba178f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 19:33:30.875525 kubelet[2938]: E1212 19:33:30.875491 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26fhz" podUID="ae74493f-92fd-45a3-a4ca-78630ba178f3" Dec 12 19:33:30.888106 containerd[1670]: time="2025-12-12T19:33:30.870729673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 19:33:32.132368 systemd[1]: Started sshd@8-10.244.101.34:22-139.178.89.65:39210.service - OpenSSH per-connection server daemon (139.178.89.65:39210). Dec 12 19:33:32.147744 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 19:33:32.147839 kernel: audit: type=1130 audit(1765568012.133:767): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.244.101.34:22-139.178.89.65:39210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:32.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.244.101.34:22-139.178.89.65:39210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:33:32.999000 audit[5080]: USER_ACCT pid=5080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:33.007498 sshd[5080]: Accepted publickey for core from 139.178.89.65 port 39210 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:33:33.011046 kernel: audit: type=1101 audit(1765568012.999:768): pid=5080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:33.014970 kernel: audit: type=1103 audit(1765568013.010:769): pid=5080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:33.010000 audit[5080]: CRED_ACQ pid=5080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:33.017459 kernel: audit: type=1006 audit(1765568013.010:770): pid=5080 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 12 19:33:33.019844 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:33:33.010000 audit[5080]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1b6c6b30 a2=3 a3=0 items=0 ppid=1 pid=5080 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:33:33.024529 kernel: audit: type=1300 audit(1765568013.010:770): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1b6c6b30 a2=3 a3=0 items=0 ppid=1 pid=5080 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:33:33.010000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:33:33.030545 kernel: audit: type=1327 audit(1765568013.010:770): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:33:33.034720 systemd-logind[1642]: New session 11 of user core. Dec 12 19:33:33.039701 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 12 19:33:33.044000 audit[5080]: USER_START pid=5080 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:33.049476 kernel: audit: type=1105 audit(1765568013.044:771): pid=5080 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:33.050000 audit[5083]: CRED_ACQ pid=5083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:33.054482 kernel: audit: type=1103 audit(1765568013.050:772): pid=5083 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:33.644471 sshd[5083]: Connection closed by 139.178.89.65 port 39210 Dec 12 19:33:33.645281 sshd-session[5080]: pam_unix(sshd:session): session closed for user core Dec 12 19:33:33.646000 audit[5080]: USER_END pid=5080 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:33.656560 kernel: audit: type=1106 audit(1765568013.646:773): pid=5080 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:33.656000 audit[5080]: CRED_DISP pid=5080 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:33.667628 kernel: audit: type=1104 audit(1765568013.656:774): pid=5080 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:33.668481 systemd[1]: sshd@8-10.244.101.34:22-139.178.89.65:39210.service: Deactivated successfully. Dec 12 19:33:33.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.244.101.34:22-139.178.89.65:39210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:33.673628 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 19:33:33.675633 systemd-logind[1642]: Session 11 logged out. Waiting for processes to exit. Dec 12 19:33:33.678959 systemd-logind[1642]: Removed session 11. 
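Each SSH login above leaves a matched pair of kernel-echoed audit records, type=1105 (USER_START, PAM session_open) and type=1106 (USER_END, PAM session_close), sharing a ses= identifier. Pairing them gives the lifetime of each session; sessions 10 and 11 above last about a second each. A minimal parsing sketch, matching only the record shapes visible in this log:

```python
#!/usr/bin/env python3
"""Pair PAM session open/close audit records from a journal dump like this one.

Minimal sketch: it only looks at the kernel-echoed records
(audit: type=1105 session_open / type=1106 session_close), keyed on the ses=
field, and reports how long each SSH session lasted.
"""
import re
import sys

REC = re.compile(r"audit: type=(1105|1106) audit\((\d+\.\d+):\d+\).*?\bses=(\d+)")

def session_durations(text):
    opened = {}      # ses -> open timestamp (epoch seconds)
    durations = {}   # ses -> seconds between open and close
    for rec_type, ts, ses in REC.findall(text):
        ts = float(ts)
        if rec_type == "1105":       # USER_START (PAM session_open)
            opened[ses] = ts
        elif ses in opened:          # 1106: USER_END (PAM session_close)
            durations[ses] = ts - opened.pop(ses)
    return durations

if __name__ == "__main__":
    for ses, secs in sorted(session_durations(sys.stdin.read()).items()):
        print(f"session {ses}: {secs:.3f}s")
```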
Dec 12 19:33:35.220857 update_engine[1643]: I20251212 19:33:35.220623 1643 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 19:33:35.221547 update_engine[1643]: I20251212 19:33:35.220881 1643 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 19:33:35.222961 update_engine[1643]: I20251212 19:33:35.222613 1643 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 12 19:33:35.223275 update_engine[1643]: E20251212 19:33:35.223230 1643 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (DNS server returned answer with no data) Dec 12 19:33:35.223549 update_engine[1643]: I20251212 19:33:35.223505 1643 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 12 19:33:38.218285 kubelet[2938]: E1212 19:33:38.218221 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:33:38.810468 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 19:33:38.810638 kernel: audit: type=1130 audit(1765568018.803:776): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.244.101.34:22-139.178.89.65:39218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:38.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.244.101.34:22-139.178.89.65:39218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:38.804960 systemd[1]: Started sshd@9-10.244.101.34:22-139.178.89.65:39218.service - OpenSSH per-connection server daemon (139.178.89.65:39218). 
Dec 12 19:33:39.215565 kubelet[2938]: E1212 19:33:39.215423 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6595c59d9d-t84l2" podUID="b2777763-2a14-4bdc-bc89-41885f08913f" Dec 12 19:33:39.650000 audit[5095]: USER_ACCT pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:39.657724 sshd[5095]: Accepted publickey for core from 139.178.89.65 port 39218 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:33:39.660553 kernel: audit: type=1101 audit(1765568019.650:777): pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:39.660000 audit[5095]: CRED_ACQ pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:39.662533 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:33:39.665718 kernel: audit: type=1103 audit(1765568019.660:778): pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:39.665808 kernel: audit: type=1006 audit(1765568019.660:779): pid=5095 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 12 19:33:39.660000 audit[5095]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff80598900 a2=3 a3=0 items=0 ppid=1 pid=5095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:33:39.668058 kernel: audit: type=1300 audit(1765568019.660:779): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff80598900 a2=3 a3=0 items=0 ppid=1 pid=5095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:33:39.660000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:33:39.672495 kernel: audit: type=1327 
audit(1765568019.660:779): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:33:39.679269 systemd-logind[1642]: New session 12 of user core. Dec 12 19:33:39.686761 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 19:33:39.690000 audit[5095]: USER_START pid=5095 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:39.695470 kernel: audit: type=1105 audit(1765568019.690:780): pid=5095 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:39.695000 audit[5098]: CRED_ACQ pid=5098 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:39.700463 kernel: audit: type=1103 audit(1765568019.695:781): pid=5098 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:40.214370 kubelet[2938]: E1212 19:33:40.214297 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-jx9n2" podUID="5344f808-2f57-4103-9d43-e41974952208" Dec 12 19:33:40.261454 sshd[5098]: Connection closed by 139.178.89.65 port 39218 Dec 12 19:33:40.261930 sshd-session[5095]: pam_unix(sshd:session): session closed for user core Dec 12 19:33:40.266000 audit[5095]: USER_END pid=5095 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:40.276792 kernel: audit: type=1106 audit(1765568020.266:782): pid=5095 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:40.279134 systemd[1]: sshd@9-10.244.101.34:22-139.178.89.65:39218.service: Deactivated successfully. 
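From 19:33:38 onward the kubelet messages switch from ErrImagePull to ImagePullBackOff: after repeated pull failures the kubelet stops retrying on every pod sync and instead waits out a growing delay between attempts. The sketch below only illustrates that doubling-with-cap schedule; the 10-second initial delay and 5-minute cap are assumed defaults and depend on the kubelet version and configuration.

```python
# Illustrative only: the doubling-with-cap delay schedule kubelet applies
# between image pull retries once a pod is in ImagePullBackOff. The 10s
# initial delay and 300s cap are assumed defaults, not read from this system.
def image_pull_backoff(initial: float = 10.0, cap: float = 300.0):
    delay = initial
    while True:
        yield delay
        delay = min(delay * 2, cap)

if __name__ == "__main__":
    import itertools
    # First eight delays: 10, 20, 40, 80, 160, 300, 300, 300 seconds.
    print(list(itertools.islice(image_pull_backoff(), 8)))
```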
Dec 12 19:33:40.272000 audit[5095]: CRED_DISP pid=5095 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:40.284905 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 19:33:40.287542 kernel: audit: type=1104 audit(1765568020.272:783): pid=5095 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:40.287072 systemd-logind[1642]: Session 12 logged out. Waiting for processes to exit. Dec 12 19:33:40.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.244.101.34:22-139.178.89.65:39218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:40.292688 systemd-logind[1642]: Removed session 12. Dec 12 19:33:43.217614 kubelet[2938]: E1212 19:33:43.217512 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dd47b96bf-8h6g2" podUID="efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac" Dec 12 19:33:44.215740 kubelet[2938]: E1212 19:33:44.213286 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-cblcg" podUID="bde65fa8-758f-4f39-b274-b1238cc0fdac" Dec 12 19:33:44.216317 kubelet[2938]: E1212 19:33:44.216282 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26fhz" podUID="ae74493f-92fd-45a3-a4ca-78630ba178f3" Dec 12 19:33:45.223594 update_engine[1643]: I20251212 19:33:45.223495 1643 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 19:33:45.224358 update_engine[1643]: I20251212 19:33:45.223636 1643 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 19:33:45.224358 update_engine[1643]: I20251212 19:33:45.224093 1643 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 12 19:33:45.224896 update_engine[1643]: E20251212 19:33:45.224853 1643 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (DNS server returned answer with no data) Dec 12 19:33:45.225014 update_engine[1643]: I20251212 19:33:45.224952 1643 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 12 19:33:45.225014 update_engine[1643]: I20251212 19:33:45.224965 1643 omaha_request_action.cc:617] Omaha request response: Dec 12 19:33:45.225155 update_engine[1643]: E20251212 19:33:45.225111 1643 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 12 19:33:45.225246 update_engine[1643]: I20251212 19:33:45.225222 1643 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 12 19:33:45.225246 update_engine[1643]: I20251212 19:33:45.225231 1643 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 12 19:33:45.225345 update_engine[1643]: I20251212 19:33:45.225240 1643 update_attempter.cc:306] Processing Done. Dec 12 19:33:45.225345 update_engine[1643]: E20251212 19:33:45.225270 1643 update_attempter.cc:619] Update failed. Dec 12 19:33:45.225345 update_engine[1643]: I20251212 19:33:45.225281 1643 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 12 19:33:45.225345 update_engine[1643]: I20251212 19:33:45.225287 1643 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 12 19:33:45.225345 update_engine[1643]: I20251212 19:33:45.225293 1643 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 12 19:33:45.226733 update_engine[1643]: I20251212 19:33:45.226330 1643 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 12 19:33:45.226733 update_engine[1643]: I20251212 19:33:45.226397 1643 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 12 19:33:45.226733 update_engine[1643]: I20251212 19:33:45.226406 1643 omaha_request_action.cc:272] Request: Dec 12 19:33:45.226733 update_engine[1643]: Dec 12 19:33:45.226733 update_engine[1643]: Dec 12 19:33:45.226733 update_engine[1643]: Dec 12 19:33:45.226733 update_engine[1643]: Dec 12 19:33:45.226733 update_engine[1643]: Dec 12 19:33:45.226733 update_engine[1643]: Dec 12 19:33:45.226733 update_engine[1643]: I20251212 19:33:45.226413 1643 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 19:33:45.226733 update_engine[1643]: I20251212 19:33:45.226468 1643 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 19:33:45.227423 update_engine[1643]: I20251212 19:33:45.226840 1643 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 12 19:33:45.227807 update_engine[1643]: E20251212 19:33:45.227699 1643 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (DNS server returned answer with no data) Dec 12 19:33:45.227992 update_engine[1643]: I20251212 19:33:45.227793 1643 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 12 19:33:45.227992 update_engine[1643]: I20251212 19:33:45.227846 1643 omaha_request_action.cc:617] Omaha request response: Dec 12 19:33:45.227992 update_engine[1643]: I20251212 19:33:45.227854 1643 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 12 19:33:45.227992 update_engine[1643]: I20251212 19:33:45.227860 1643 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 12 19:33:45.227992 update_engine[1643]: I20251212 19:33:45.227865 1643 update_attempter.cc:306] Processing Done. Dec 12 19:33:45.227992 update_engine[1643]: I20251212 19:33:45.227872 1643 update_attempter.cc:310] Error event sent. Dec 12 19:33:45.227992 update_engine[1643]: I20251212 19:33:45.227882 1643 update_check_scheduler.cc:74] Next update check in 47m29s Dec 12 19:33:45.229805 locksmithd[1693]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 12 19:33:45.230400 locksmithd[1693]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 12 19:33:45.426227 systemd[1]: Started sshd@10-10.244.101.34:22-139.178.89.65:55350.service - OpenSSH per-connection server daemon (139.178.89.65:55350). Dec 12 19:33:45.434291 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 19:33:45.434349 kernel: audit: type=1130 audit(1765568025.425:785): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.244.101.34:22-139.178.89.65:55350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:45.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.244.101.34:22-139.178.89.65:55350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:33:46.262000 audit[5137]: USER_ACCT pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:46.271814 kernel: audit: type=1101 audit(1765568026.262:786): pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:46.271884 sshd[5137]: Accepted publickey for core from 139.178.89.65 port 55350 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:33:46.273000 audit[5137]: CRED_ACQ pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:46.277563 kernel: audit: type=1103 audit(1765568026.273:787): pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:46.277809 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:33:46.281537 kernel: audit: type=1006 audit(1765568026.273:788): pid=5137 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 12 19:33:46.273000 audit[5137]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff571c66a0 a2=3 a3=0 items=0 ppid=1 pid=5137 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:33:46.286252 kernel: audit: type=1300 audit(1765568026.273:788): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff571c66a0 a2=3 a3=0 items=0 ppid=1 pid=5137 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:33:46.286690 kernel: audit: type=1327 audit(1765568026.273:788): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:33:46.273000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:33:46.294672 systemd-logind[1642]: New session 13 of user core. Dec 12 19:33:46.303872 systemd[1]: Started session-13.scope - Session 13 of User core. 
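The "Accepted publickey" lines above identify the client key only by its SHA256 fingerprint, which OpenSSH computes as the unpadded base64 of the SHA-256 digest of the raw key blob. A small sketch for reproducing that form from an authorized_keys entry; the key material itself is not part of this log, so the input is whatever key file is at hand:

```python
#!/usr/bin/env python3
"""Compute the SHA256 fingerprint sshd logs for an accepted public key.

Minimal sketch: takes one authorized_keys-style line ("ssh-rsa AAAA... comment")
on stdin and prints the same "SHA256:..." form that appears in the
"Accepted publickey" messages above.
"""
import base64
import hashlib
import sys

def fingerprint(authorized_keys_line: str) -> str:
    # The second whitespace-separated field is the base64-encoded key blob.
    blob = base64.b64decode(authorized_keys_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    # OpenSSH prints the digest as base64 without trailing '=' padding.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    print(fingerprint(sys.stdin.readline()))
```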
Dec 12 19:33:46.309000 audit[5137]: USER_START pid=5137 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:46.314568 kernel: audit: type=1105 audit(1765568026.309:789): pid=5137 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:46.314000 audit[5140]: CRED_ACQ pid=5140 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:46.318457 kernel: audit: type=1103 audit(1765568026.314:790): pid=5140 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:46.947295 sshd[5140]: Connection closed by 139.178.89.65 port 55350 Dec 12 19:33:46.947861 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Dec 12 19:33:46.950000 audit[5137]: USER_END pid=5137 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:46.960965 kernel: audit: type=1106 audit(1765568026.950:791): pid=5137 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:46.950000 audit[5137]: CRED_DISP pid=5137 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:46.964359 systemd[1]: sshd@10-10.244.101.34:22-139.178.89.65:55350.service: Deactivated successfully. Dec 12 19:33:46.965588 kernel: audit: type=1104 audit(1765568026.950:792): pid=5137 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:46.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.244.101.34:22-139.178.89.65:55350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:46.971703 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 19:33:46.977981 systemd-logind[1642]: Session 13 logged out. Waiting for processes to exit. Dec 12 19:33:46.979686 systemd-logind[1642]: Removed session 13. 
Dec 12 19:33:47.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.244.101.34:22-139.178.89.65:55360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:47.107783 systemd[1]: Started sshd@11-10.244.101.34:22-139.178.89.65:55360.service - OpenSSH per-connection server daemon (139.178.89.65:55360). Dec 12 19:33:47.890000 audit[5154]: USER_ACCT pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:47.891150 sshd[5154]: Accepted publickey for core from 139.178.89.65 port 55360 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:33:47.893000 audit[5154]: CRED_ACQ pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:47.893000 audit[5154]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc49538700 a2=3 a3=0 items=0 ppid=1 pid=5154 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:33:47.893000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:33:47.894049 sshd-session[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:33:47.901785 systemd-logind[1642]: New session 14 of user core. Dec 12 19:33:47.908711 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 19:33:47.913000 audit[5154]: USER_START pid=5154 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:47.915000 audit[5157]: CRED_ACQ pid=5157 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:48.545857 sshd[5157]: Connection closed by 139.178.89.65 port 55360 Dec 12 19:33:48.545802 sshd-session[5154]: pam_unix(sshd:session): session closed for user core Dec 12 19:33:48.549000 audit[5154]: USER_END pid=5154 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:48.550000 audit[5154]: CRED_DISP pid=5154 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:48.554353 systemd[1]: sshd@11-10.244.101.34:22-139.178.89.65:55360.service: Deactivated successfully. Dec 12 19:33:48.554499 systemd-logind[1642]: Session 14 logged out. 
Waiting for processes to exit. Dec 12 19:33:48.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.244.101.34:22-139.178.89.65:55360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:48.558407 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 19:33:48.567812 systemd-logind[1642]: Removed session 14. Dec 12 19:33:48.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.244.101.34:22-139.178.89.65:55368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:48.705691 systemd[1]: Started sshd@12-10.244.101.34:22-139.178.89.65:55368.service - OpenSSH per-connection server daemon (139.178.89.65:55368). Dec 12 19:33:49.531000 audit[5168]: USER_ACCT pid=5168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:49.533861 sshd[5168]: Accepted publickey for core from 139.178.89.65 port 55368 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:33:49.534000 audit[5168]: CRED_ACQ pid=5168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:49.534000 audit[5168]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee31a4160 a2=3 a3=0 items=0 ppid=1 pid=5168 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:33:49.534000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:33:49.537830 sshd-session[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:33:49.546787 systemd-logind[1642]: New session 15 of user core. Dec 12 19:33:49.554769 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 12 19:33:49.559000 audit[5168]: USER_START pid=5168 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:49.563000 audit[5171]: CRED_ACQ pid=5171 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:50.202132 sshd[5171]: Connection closed by 139.178.89.65 port 55368 Dec 12 19:33:50.206102 sshd-session[5168]: pam_unix(sshd:session): session closed for user core Dec 12 19:33:50.217000 audit[5168]: USER_END pid=5168 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:50.220000 audit[5168]: CRED_DISP pid=5168 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:50.223510 kubelet[2938]: E1212 19:33:50.221206 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:33:50.226973 systemd[1]: sshd@12-10.244.101.34:22-139.178.89.65:55368.service: Deactivated successfully. Dec 12 19:33:50.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.244.101.34:22-139.178.89.65:55368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:50.230808 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 19:33:50.237716 systemd-logind[1642]: Session 15 logged out. Waiting for processes to exit. Dec 12 19:33:50.238577 systemd-logind[1642]: Removed session 15. 
Dec 12 19:33:53.216573 kubelet[2938]: E1212 19:33:53.216513 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6595c59d9d-t84l2" podUID="b2777763-2a14-4bdc-bc89-41885f08913f" Dec 12 19:33:54.215173 kubelet[2938]: E1212 19:33:54.214595 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-jx9n2" podUID="5344f808-2f57-4103-9d43-e41974952208" Dec 12 19:33:55.366545 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 12 19:33:55.366797 kernel: audit: type=1130 audit(1765568035.359:812): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.244.101.34:22-139.178.89.65:52944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:55.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.244.101.34:22-139.178.89.65:52944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:55.360192 systemd[1]: Started sshd@13-10.244.101.34:22-139.178.89.65:52944.service - OpenSSH per-connection server daemon (139.178.89.65:52944). 
Dec 12 19:33:56.184000 audit[5191]: USER_ACCT pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:56.192668 kernel: audit: type=1101 audit(1765568036.184:813): pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:56.192823 sshd[5191]: Accepted publickey for core from 139.178.89.65 port 52944 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:33:56.200000 audit[5191]: CRED_ACQ pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:56.203275 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:33:56.205477 kernel: audit: type=1103 audit(1765568036.200:814): pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:56.207459 kernel: audit: type=1006 audit(1765568036.200:815): pid=5191 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 12 19:33:56.200000 audit[5191]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc99c20070 a2=3 a3=0 items=0 ppid=1 pid=5191 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:33:56.211454 kernel: audit: type=1300 audit(1765568036.200:815): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc99c20070 a2=3 a3=0 items=0 ppid=1 pid=5191 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:33:56.200000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:33:56.215486 kernel: audit: type=1327 audit(1765568036.200:815): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:33:56.218134 kubelet[2938]: E1212 19:33:56.218061 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-cblcg" podUID="bde65fa8-758f-4f39-b274-b1238cc0fdac" Dec 12 19:33:56.224521 systemd-logind[1642]: New session 16 of user core. Dec 12 19:33:56.232751 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 12 19:33:56.238000 audit[5191]: USER_START pid=5191 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:56.243472 kernel: audit: type=1105 audit(1765568036.238:816): pid=5191 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:56.245000 audit[5194]: CRED_ACQ pid=5194 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:56.250574 kernel: audit: type=1103 audit(1765568036.245:817): pid=5194 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:56.828472 sshd[5194]: Connection closed by 139.178.89.65 port 52944 Dec 12 19:33:56.833541 sshd-session[5191]: pam_unix(sshd:session): session closed for user core Dec 12 19:33:56.839000 audit[5191]: USER_END pid=5191 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:56.853675 kernel: audit: type=1106 audit(1765568036.839:818): pid=5191 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:56.850000 audit[5191]: CRED_DISP pid=5191 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:56.865640 kernel: audit: type=1104 audit(1765568036.850:819): pid=5191 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:33:56.871396 systemd[1]: sshd@13-10.244.101.34:22-139.178.89.65:52944.service: Deactivated successfully. Dec 12 19:33:56.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.244.101.34:22-139.178.89.65:52944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:33:56.875393 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 19:33:56.877735 systemd-logind[1642]: Session 16 logged out. Waiting for processes to exit. Dec 12 19:33:56.880157 systemd-logind[1642]: Removed session 16. 
Dec 12 19:33:57.221505 kubelet[2938]: E1212 19:33:57.219848 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dd47b96bf-8h6g2" podUID="efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac" Dec 12 19:33:59.217022 kubelet[2938]: E1212 19:33:59.216966 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26fhz" podUID="ae74493f-92fd-45a3-a4ca-78630ba178f3" Dec 12 19:34:01.222561 kubelet[2938]: E1212 19:34:01.222479 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:34:01.985100 systemd[1]: Started sshd@14-10.244.101.34:22-139.178.89.65:36794.service - OpenSSH per-connection server daemon (139.178.89.65:36794). Dec 12 19:34:01.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.244.101.34:22-139.178.89.65:36794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:34:01.989564 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 19:34:01.989713 kernel: audit: type=1130 audit(1765568041.985:821): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.244.101.34:22-139.178.89.65:36794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:34:02.805000 audit[5208]: USER_ACCT pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:02.811074 sshd[5208]: Accepted publickey for core from 139.178.89.65 port 36794 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:34:02.812470 kernel: audit: type=1101 audit(1765568042.805:822): pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:02.814811 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:34:02.812000 audit[5208]: CRED_ACQ pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:02.818577 kernel: audit: type=1103 audit(1765568042.812:823): pid=5208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:02.823559 kernel: audit: type=1006 audit(1765568042.812:824): pid=5208 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 12 19:34:02.812000 audit[5208]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb27f1ee0 a2=3 a3=0 items=0 ppid=1 pid=5208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:02.830748 kernel: audit: type=1300 audit(1765568042.812:824): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb27f1ee0 a2=3 a3=0 items=0 ppid=1 pid=5208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:02.833349 systemd-logind[1642]: New session 17 of user core. Dec 12 19:34:02.812000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:34:02.837539 kernel: audit: type=1327 audit(1765568042.812:824): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:34:02.838933 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 12 19:34:02.842000 audit[5208]: USER_START pid=5208 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:02.847497 kernel: audit: type=1105 audit(1765568042.842:825): pid=5208 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:02.850659 kernel: audit: type=1103 audit(1765568042.847:826): pid=5211 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:02.847000 audit[5211]: CRED_ACQ pid=5211 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:03.399228 sshd[5211]: Connection closed by 139.178.89.65 port 36794 Dec 12 19:34:03.399737 sshd-session[5208]: pam_unix(sshd:session): session closed for user core Dec 12 19:34:03.402000 audit[5208]: USER_END pid=5208 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:03.411785 kernel: audit: type=1106 audit(1765568043.402:827): pid=5208 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:03.411975 kernel: audit: type=1104 audit(1765568043.402:828): pid=5208 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:03.402000 audit[5208]: CRED_DISP pid=5208 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:03.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.244.101.34:22-139.178.89.65:36794 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:34:03.414532 systemd-logind[1642]: Session 17 logged out. Waiting for processes to exit. Dec 12 19:34:03.414930 systemd[1]: sshd@14-10.244.101.34:22-139.178.89.65:36794.service: Deactivated successfully. Dec 12 19:34:03.418800 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 19:34:03.422632 systemd-logind[1642]: Removed session 17. 
Dec 12 19:34:05.244472 kubelet[2938]: E1212 19:34:05.242886 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-jx9n2" podUID="5344f808-2f57-4103-9d43-e41974952208" Dec 12 19:34:05.244472 kubelet[2938]: E1212 19:34:05.243151 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6595c59d9d-t84l2" podUID="b2777763-2a14-4bdc-bc89-41885f08913f" Dec 12 19:34:08.563062 systemd[1]: Started sshd@15-10.244.101.34:22-139.178.89.65:36798.service - OpenSSH per-connection server daemon (139.178.89.65:36798). Dec 12 19:34:08.573034 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 19:34:08.573343 kernel: audit: type=1130 audit(1765568048.562:830): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.244.101.34:22-139.178.89.65:36798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:34:08.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.244.101.34:22-139.178.89.65:36798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:34:09.214454 kubelet[2938]: E1212 19:34:09.214354 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-cblcg" podUID="bde65fa8-758f-4f39-b274-b1238cc0fdac" Dec 12 19:34:09.401000 audit[5229]: USER_ACCT pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:09.401885 sshd[5229]: Accepted publickey for core from 139.178.89.65 port 36798 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:34:09.410363 sshd-session[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:34:09.410724 kernel: audit: type=1101 audit(1765568049.401:831): pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:09.417780 kernel: audit: type=1103 audit(1765568049.408:832): pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:09.408000 audit[5229]: CRED_ACQ pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:09.420663 kernel: audit: type=1006 audit(1765568049.409:833): pid=5229 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 12 19:34:09.409000 audit[5229]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc219d6830 a2=3 a3=0 items=0 ppid=1 pid=5229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:09.425123 kernel: audit: type=1300 audit(1765568049.409:833): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc219d6830 a2=3 a3=0 items=0 ppid=1 pid=5229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:09.425208 kernel: audit: type=1327 audit(1765568049.409:833): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:34:09.409000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:34:09.430448 systemd-logind[1642]: New session 18 of user core. Dec 12 19:34:09.440498 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 12 19:34:09.446000 audit[5229]: USER_START pid=5229 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:09.451638 kernel: audit: type=1105 audit(1765568049.446:834): pid=5229 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:09.452000 audit[5234]: CRED_ACQ pid=5234 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:09.457482 kernel: audit: type=1103 audit(1765568049.452:835): pid=5234 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:09.992979 sshd[5234]: Connection closed by 139.178.89.65 port 36798 Dec 12 19:34:09.994711 sshd-session[5229]: pam_unix(sshd:session): session closed for user core Dec 12 19:34:09.996000 audit[5229]: USER_END pid=5229 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:10.007847 kernel: audit: type=1106 audit(1765568049.996:836): pid=5229 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:10.004000 audit[5229]: CRED_DISP pid=5229 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:10.017463 kernel: audit: type=1104 audit(1765568050.004:837): pid=5229 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:10.019354 systemd[1]: sshd@15-10.244.101.34:22-139.178.89.65:36798.service: Deactivated successfully. Dec 12 19:34:10.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.244.101.34:22-139.178.89.65:36798 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:34:10.028218 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 19:34:10.036709 systemd-logind[1642]: Session 18 logged out. Waiting for processes to exit. Dec 12 19:34:10.041035 systemd-logind[1642]: Removed session 18. 
Dec 12 19:34:10.163365 systemd[1]: Started sshd@16-10.244.101.34:22-139.178.89.65:36802.service - OpenSSH per-connection server daemon (139.178.89.65:36802). Dec 12 19:34:10.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.244.101.34:22-139.178.89.65:36802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:34:10.940000 audit[5246]: USER_ACCT pid=5246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:10.941802 sshd[5246]: Accepted publickey for core from 139.178.89.65 port 36802 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:34:10.943000 audit[5246]: CRED_ACQ pid=5246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:10.943000 audit[5246]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff93c4bbc0 a2=3 a3=0 items=0 ppid=1 pid=5246 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:10.943000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:34:10.943948 sshd-session[5246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:34:10.952690 systemd-logind[1642]: New session 19 of user core. Dec 12 19:34:10.961821 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 12 19:34:10.965000 audit[5246]: USER_START pid=5246 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:10.969000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:11.218869 containerd[1670]: time="2025-12-12T19:34:11.218043255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 19:34:11.646819 containerd[1670]: time="2025-12-12T19:34:11.646717735Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:34:11.647881 containerd[1670]: time="2025-12-12T19:34:11.647832477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 19:34:11.648013 containerd[1670]: time="2025-12-12T19:34:11.647965245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 19:34:11.648778 kubelet[2938]: E1212 19:34:11.648275 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 19:34:11.648778 kubelet[2938]: E1212 19:34:11.648372 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 19:34:11.648778 kubelet[2938]: E1212 19:34:11.648683 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gsqn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5dd47b96bf-8h6g2_calico-system(efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 19:34:11.650890 kubelet[2938]: E1212 19:34:11.650828 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dd47b96bf-8h6g2" podUID="efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac" Dec 12 19:34:11.699724 sshd[5249]: Connection closed by 139.178.89.65 port 36802 Dec 12 19:34:11.701745 sshd-session[5246]: pam_unix(sshd:session): session closed for user core Dec 12 19:34:11.709000 audit[5246]: USER_END 
pid=5246 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:11.709000 audit[5246]: CRED_DISP pid=5246 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:11.715696 systemd[1]: sshd@16-10.244.101.34:22-139.178.89.65:36802.service: Deactivated successfully. Dec 12 19:34:11.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.244.101.34:22-139.178.89.65:36802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:34:11.720351 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 19:34:11.736641 systemd-logind[1642]: Session 19 logged out. Waiting for processes to exit. Dec 12 19:34:11.740475 systemd-logind[1642]: Removed session 19. Dec 12 19:34:11.860959 systemd[1]: Started sshd@17-10.244.101.34:22-139.178.89.65:48470.service - OpenSSH per-connection server daemon (139.178.89.65:48470). Dec 12 19:34:11.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.244.101.34:22-139.178.89.65:48470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:34:12.717000 audit[5259]: USER_ACCT pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:12.720563 sshd[5259]: Accepted publickey for core from 139.178.89.65 port 48470 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:34:12.721000 audit[5259]: CRED_ACQ pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:12.721000 audit[5259]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdc9d7bc0 a2=3 a3=0 items=0 ppid=1 pid=5259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:12.721000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:34:12.724596 sshd-session[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:34:12.733802 systemd-logind[1642]: New session 20 of user core. Dec 12 19:34:12.739782 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 12 19:34:12.744000 audit[5259]: USER_START pid=5259 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:12.749000 audit[5262]: CRED_ACQ pid=5262 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:13.217396 containerd[1670]: time="2025-12-12T19:34:13.216996593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 19:34:13.552182 containerd[1670]: time="2025-12-12T19:34:13.552015357Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:34:13.554565 containerd[1670]: time="2025-12-12T19:34:13.553256415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 19:34:13.554565 containerd[1670]: time="2025-12-12T19:34:13.553486472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 19:34:13.555085 kubelet[2938]: E1212 19:34:13.554998 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 19:34:13.556125 kubelet[2938]: E1212 19:34:13.555677 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 19:34:13.556125 kubelet[2938]: E1212 19:34:13.555941 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pl7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-klld4_calico-system(3228a46d-97a1-46c2-a390-a03b4bb70892): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 19:34:13.560010 containerd[1670]: time="2025-12-12T19:34:13.559973079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 19:34:13.881044 containerd[1670]: time="2025-12-12T19:34:13.880388031Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:34:13.884763 containerd[1670]: time="2025-12-12T19:34:13.884149971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 19:34:13.884763 containerd[1670]: time="2025-12-12T19:34:13.884171239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 19:34:13.885528 kubelet[2938]: E1212 19:34:13.885301 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 19:34:13.886414 kubelet[2938]: E1212 19:34:13.885998 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 19:34:13.886414 kubelet[2938]: E1212 19:34:13.886192 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pl7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-klld4_calico-system(3228a46d-97a1-46c2-a390-a03b4bb70892): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 19:34:13.888222 kubelet[2938]: E1212 19:34:13.887586 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:34:13.918000 audit[5296]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:34:13.933009 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 12 19:34:13.933124 kernel: audit: type=1325 
audit(1765568053.918:854): table=filter:144 family=2 entries=26 op=nft_register_rule pid=5296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:34:13.933161 kernel: audit: type=1300 audit(1765568053.918:854): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffee0bc59b0 a2=0 a3=7ffee0bc599c items=0 ppid=3062 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:13.918000 audit[5296]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffee0bc59b0 a2=0 a3=7ffee0bc599c items=0 ppid=3062 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:13.918000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:34:13.941472 kernel: audit: type=1327 audit(1765568053.918:854): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:34:13.935000 audit[5296]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:34:13.944564 kernel: audit: type=1325 audit(1765568053.935:855): table=nat:145 family=2 entries=20 op=nft_register_rule pid=5296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:34:13.935000 audit[5296]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffee0bc59b0 a2=0 a3=0 items=0 ppid=3062 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:13.950473 kernel: audit: type=1300 audit(1765568053.935:855): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffee0bc59b0 a2=0 a3=0 items=0 ppid=3062 pid=5296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:13.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:34:13.953520 kernel: audit: type=1327 audit(1765568053.935:855): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:34:13.961000 audit[5298]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:34:13.965554 kernel: audit: type=1325 audit(1765568053.961:856): table=filter:146 family=2 entries=38 op=nft_register_rule pid=5298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:34:13.961000 audit[5298]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffcc3a6b3f0 a2=0 a3=7ffcc3a6b3dc items=0 ppid=3062 pid=5298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:13.970529 kernel: audit: type=1300 audit(1765568053.961:856): arch=c000003e syscall=46 success=yes 
exit=14176 a0=3 a1=7ffcc3a6b3f0 a2=0 a3=7ffcc3a6b3dc items=0 ppid=3062 pid=5298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:13.961000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:34:13.974470 kernel: audit: type=1327 audit(1765568053.961:856): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:34:13.967000 audit[5298]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:34:13.977520 kernel: audit: type=1325 audit(1765568053.967:857): table=nat:147 family=2 entries=20 op=nft_register_rule pid=5298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:34:13.967000 audit[5298]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcc3a6b3f0 a2=0 a3=0 items=0 ppid=3062 pid=5298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:13.967000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:34:14.077458 sshd[5262]: Connection closed by 139.178.89.65 port 48470 Dec 12 19:34:14.078485 sshd-session[5259]: pam_unix(sshd:session): session closed for user core Dec 12 19:34:14.081000 audit[5259]: USER_END pid=5259 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:14.081000 audit[5259]: CRED_DISP pid=5259 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:14.088614 systemd[1]: sshd@17-10.244.101.34:22-139.178.89.65:48470.service: Deactivated successfully. Dec 12 19:34:14.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.244.101.34:22-139.178.89.65:48470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:34:14.091918 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 19:34:14.099067 systemd-logind[1642]: Session 20 logged out. Waiting for processes to exit. Dec 12 19:34:14.100658 systemd-logind[1642]: Removed session 20. Dec 12 19:34:14.214197 containerd[1670]: time="2025-12-12T19:34:14.213822260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 19:34:14.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.244.101.34:22-139.178.89.65:48484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:34:14.237832 systemd[1]: Started sshd@18-10.244.101.34:22-139.178.89.65:48484.service - OpenSSH per-connection server daemon (139.178.89.65:48484). 
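The PROCTITLE values in the audit records above are hex-encoded, NUL-separated argv strings. A minimal decoding sketch (Python, illustrative only and not part of the captured journal; the literal below is exactly the value logged by the iptables-restore events):

    # Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters

The same helper applied to the sshd records' PROCTITLE (737368642D...5D) yields "sshd-session: core [priv]".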
Dec 12 19:34:14.531704 containerd[1670]: time="2025-12-12T19:34:14.531530130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:34:14.532763 containerd[1670]: time="2025-12-12T19:34:14.532458745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 19:34:14.532763 containerd[1670]: time="2025-12-12T19:34:14.532571637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 19:34:14.533112 kubelet[2938]: E1212 19:34:14.533047 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 19:34:14.533189 kubelet[2938]: E1212 19:34:14.533136 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 19:34:14.533451 kubelet[2938]: E1212 19:34:14.533377 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzgz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-26fhz_calico-system(ae74493f-92fd-45a3-a4ca-78630ba178f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 19:34:14.534933 kubelet[2938]: E1212 19:34:14.534879 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26fhz" podUID="ae74493f-92fd-45a3-a4ca-78630ba178f3" Dec 12 19:34:15.050000 audit[5304]: USER_ACCT pid=5304 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:15.052349 sshd[5304]: Accepted publickey for core from 139.178.89.65 port 48484 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:34:15.052000 audit[5304]: CRED_ACQ pid=5304 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:15.052000 audit[5304]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2d8dc740 a2=3 a3=0 items=0 ppid=1 pid=5304 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:15.052000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:34:15.055538 sshd-session[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:34:15.065534 systemd-logind[1642]: New session 21 of user core. Dec 12 19:34:15.069979 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 12 19:34:15.073000 audit[5304]: USER_START pid=5304 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:15.076000 audit[5307]: CRED_ACQ pid=5307 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:16.054993 sshd[5307]: Connection closed by 139.178.89.65 port 48484 Dec 12 19:34:16.055390 sshd-session[5304]: pam_unix(sshd:session): session closed for user core Dec 12 19:34:16.058000 audit[5304]: USER_END pid=5304 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:16.058000 audit[5304]: CRED_DISP pid=5304 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:16.064675 systemd-logind[1642]: Session 21 logged out. Waiting for processes to exit. Dec 12 19:34:16.064970 systemd[1]: sshd@18-10.244.101.34:22-139.178.89.65:48484.service: Deactivated successfully. Dec 12 19:34:16.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.244.101.34:22-139.178.89.65:48484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:34:16.070227 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 19:34:16.076974 systemd-logind[1642]: Removed session 21. Dec 12 19:34:16.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.244.101.34:22-139.178.89.65:48492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:34:16.216173 systemd[1]: Started sshd@19-10.244.101.34:22-139.178.89.65:48492.service - OpenSSH per-connection server daemon (139.178.89.65:48492). 
Dec 12 19:34:16.218258 containerd[1670]: time="2025-12-12T19:34:16.217323242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 19:34:16.549757 containerd[1670]: time="2025-12-12T19:34:16.549691081Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:34:16.550628 containerd[1670]: time="2025-12-12T19:34:16.550591569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 19:34:16.550742 containerd[1670]: time="2025-12-12T19:34:16.550696627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 19:34:16.552793 kubelet[2938]: E1212 19:34:16.552737 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 19:34:16.553487 kubelet[2938]: E1212 19:34:16.552825 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 19:34:16.553487 kubelet[2938]: E1212 19:34:16.553421 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:883a52629ac84d8fb872d294f44af2b8,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7ljs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6595c59d9d-t84l2_calico-system(b2777763-2a14-4bdc-bc89-41885f08913f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 19:34:16.556953 containerd[1670]: time="2025-12-12T19:34:16.556727334Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 19:34:16.888415 containerd[1670]: time="2025-12-12T19:34:16.888342356Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:34:16.895669 containerd[1670]: time="2025-12-12T19:34:16.895606335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 19:34:16.899596 containerd[1670]: time="2025-12-12T19:34:16.896121437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 19:34:16.900255 kubelet[2938]: E1212 19:34:16.900183 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 19:34:16.900918 kubelet[2938]: E1212 19:34:16.900886 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 19:34:16.904977 kubelet[2938]: E1212 19:34:16.904914 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7ljs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6595c59d9d-t84l2_calico-system(b2777763-2a14-4bdc-bc89-41885f08913f): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 19:34:16.906201 kubelet[2938]: E1212 19:34:16.906153 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6595c59d9d-t84l2" podUID="b2777763-2a14-4bdc-bc89-41885f08913f" Dec 12 19:34:17.038000 audit[5317]: USER_ACCT pid=5317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:17.040991 sshd[5317]: Accepted publickey for core from 139.178.89.65 port 48492 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:34:17.041000 audit[5317]: CRED_ACQ pid=5317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:17.041000 audit[5317]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefa341080 a2=3 a3=0 items=0 ppid=1 pid=5317 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:17.041000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:34:17.044032 sshd-session[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:34:17.058886 systemd-logind[1642]: New session 22 of user core. Dec 12 19:34:17.069090 systemd[1]: Started session-22.scope - Session 22 of User core. 
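Every Calico image pull above fails the same way: ghcr.io answers 404 for the v3.30.4 tags and containerd reports NotFound. A small sketch that tallies the failing references from a saved copy of this journal (the file name journal.txt is an assumption for illustration, not something the system produced):

    import re
    from collections import Counter

    # Count image references that containerd/kubelet report as "failed to pull
    # and unpack"; tolerant of the single- and double-escaped quotes in the log.
    pattern = re.compile(r'failed to pull and unpack image \\+"([^"\\]+)')

    counts = Counter()
    with open("journal.txt", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            counts.update(pattern.findall(line))

    for image, hits in counts.most_common():
        print(f"{hits:3d}  {image}")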
Dec 12 19:34:17.080000 audit[5317]: USER_START pid=5317 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:17.085000 audit[5320]: CRED_ACQ pid=5320 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:17.623387 sshd[5320]: Connection closed by 139.178.89.65 port 48492 Dec 12 19:34:17.624040 sshd-session[5317]: pam_unix(sshd:session): session closed for user core Dec 12 19:34:17.626000 audit[5317]: USER_END pid=5317 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:17.627000 audit[5317]: CRED_DISP pid=5317 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:17.633330 systemd[1]: sshd@19-10.244.101.34:22-139.178.89.65:48492.service: Deactivated successfully. Dec 12 19:34:17.633946 systemd-logind[1642]: Session 22 logged out. Waiting for processes to exit. Dec 12 19:34:17.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.244.101.34:22-139.178.89.65:48492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:34:17.637399 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 19:34:17.640394 systemd-logind[1642]: Removed session 22. 
Dec 12 19:34:18.215928 containerd[1670]: time="2025-12-12T19:34:18.215845319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 19:34:18.570727 containerd[1670]: time="2025-12-12T19:34:18.570278664Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:34:18.571662 containerd[1670]: time="2025-12-12T19:34:18.571530646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 19:34:18.571662 containerd[1670]: time="2025-12-12T19:34:18.571577820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 19:34:18.574194 kubelet[2938]: E1212 19:34:18.574144 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:34:18.574636 kubelet[2938]: E1212 19:34:18.574222 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:34:18.575485 kubelet[2938]: E1212 19:34:18.574410 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pj2pl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58bf9fd49d-jx9n2_calico-apiserver(5344f808-2f57-4103-9d43-e41974952208): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 19:34:18.576666 kubelet[2938]: E1212 19:34:18.576630 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-jx9n2" podUID="5344f808-2f57-4103-9d43-e41974952208" Dec 12 19:34:20.214629 containerd[1670]: time="2025-12-12T19:34:20.214562537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 19:34:20.517317 containerd[1670]: time="2025-12-12T19:34:20.516379221Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 19:34:20.517499 containerd[1670]: time="2025-12-12T19:34:20.517262077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 19:34:20.517734 containerd[1670]: time="2025-12-12T19:34:20.517429474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 19:34:20.518545 kubelet[2938]: E1212 19:34:20.517869 2938 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:34:20.518545 kubelet[2938]: E1212 19:34:20.517950 2938 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 19:34:20.518545 kubelet[2938]: E1212 19:34:20.518223 2938 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2sfrt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58bf9fd49d-cblcg_calico-apiserver(bde65fa8-758f-4f39-b274-b1238cc0fdac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 19:34:20.520322 kubelet[2938]: E1212 19:34:20.519607 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-cblcg" podUID="bde65fa8-758f-4f39-b274-b1238cc0fdac" Dec 12 19:34:22.801664 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 12 19:34:22.807844 kernel: audit: type=1130 audit(1765568062.788:879): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.244.101.34:22-139.178.89.65:55416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:34:22.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.244.101.34:22-139.178.89.65:55416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:34:22.790041 systemd[1]: Started sshd@20-10.244.101.34:22-139.178.89.65:55416.service - OpenSSH per-connection server daemon (139.178.89.65:55416). 
Dec 12 19:34:23.220540 kubelet[2938]: E1212 19:34:23.220026 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dd47b96bf-8h6g2" podUID="efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac" Dec 12 19:34:23.645147 sshd[5353]: Accepted publickey for core from 139.178.89.65 port 55416 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:34:23.643000 audit[5353]: USER_ACCT pid=5353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:23.651219 sshd-session[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:34:23.652498 kernel: audit: type=1101 audit(1765568063.643:880): pid=5353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:23.648000 audit[5353]: CRED_ACQ pid=5353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:23.661460 kernel: audit: type=1103 audit(1765568063.648:881): pid=5353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:23.670070 kernel: audit: type=1006 audit(1765568063.648:882): pid=5353 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 12 19:34:23.670203 kernel: audit: type=1300 audit(1765568063.648:882): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4718af50 a2=3 a3=0 items=0 ppid=1 pid=5353 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:23.648000 audit[5353]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4718af50 a2=3 a3=0 items=0 ppid=1 pid=5353 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:23.675152 systemd-logind[1642]: New session 23 of user core. Dec 12 19:34:23.648000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:34:23.677468 kernel: audit: type=1327 audit(1765568063.648:882): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:34:23.679452 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 12 19:34:23.684000 audit[5353]: USER_START pid=5353 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:23.691476 kernel: audit: type=1105 audit(1765568063.684:883): pid=5353 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:23.692000 audit[5356]: CRED_ACQ pid=5356 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:23.697577 kernel: audit: type=1103 audit(1765568063.692:884): pid=5356 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:24.229258 sshd[5356]: Connection closed by 139.178.89.65 port 55416 Dec 12 19:34:24.229806 sshd-session[5353]: pam_unix(sshd:session): session closed for user core Dec 12 19:34:24.231000 audit[5353]: USER_END pid=5353 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:24.236893 systemd[1]: sshd@20-10.244.101.34:22-139.178.89.65:55416.service: Deactivated successfully. Dec 12 19:34:24.239814 kernel: audit: type=1106 audit(1765568064.231:885): pid=5353 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:24.239909 kernel: audit: type=1104 audit(1765568064.231:886): pid=5353 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:24.231000 audit[5353]: CRED_DISP pid=5353 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:24.238129 systemd-logind[1642]: Session 23 logged out. Waiting for processes to exit. Dec 12 19:34:24.243407 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 19:34:24.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.244.101.34:22-139.178.89.65:55416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:34:24.248924 systemd-logind[1642]: Removed session 23. 
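The audit trail above opens and closes one short SSH session after another (sessions 20 through 23 so far). A sketch that pairs the USER_START/USER_END records by audit session id to report how long each session lasted (again reading a saved journal.txt, an assumed file name):

    import re
    from datetime import datetime

    # Matches e.g. "Dec 12 19:34:12.744000 audit[5259]: USER_START pid=... ses=20"
    rec = re.compile(
        r'(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) audit\[\d+\]: '
        r'(USER_START|USER_END) pid=\d+ uid=\d+ auid=\d+ ses=(\d+)'
    )

    opened = {}
    with open("journal.txt", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for when, kind, ses in rec.findall(line):
                t = datetime.strptime(when, "%b %d %H:%M:%S.%f")
                if kind == "USER_START":
                    opened[ses] = t
                elif ses in opened:
                    dur = (t - opened.pop(ses)).total_seconds()
                    print(f"ses={ses} lasted {dur:.1f}s")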
Dec 12 19:34:25.822000 audit[5368]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:34:25.822000 audit[5368]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffc898bb30 a2=0 a3=7fffc898bb1c items=0 ppid=3062 pid=5368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:25.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:34:25.830000 audit[5368]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 19:34:25.830000 audit[5368]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffc898bb30 a2=0 a3=7fffc898bb1c items=0 ppid=3062 pid=5368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:25.830000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 19:34:29.221219 kubelet[2938]: E1212 19:34:29.221113 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-klld4" podUID="3228a46d-97a1-46c2-a390-a03b4bb70892" Dec 12 19:34:29.224025 kubelet[2938]: E1212 19:34:29.221426 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-jx9n2" podUID="5344f808-2f57-4103-9d43-e41974952208" Dec 12 19:34:29.389808 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 12 19:34:29.390100 kernel: audit: type=1130 audit(1765568069.385:890): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.244.101.34:22-139.178.89.65:55426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 19:34:29.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.244.101.34:22-139.178.89.65:55426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 19:34:29.387076 systemd[1]: Started sshd@21-10.244.101.34:22-139.178.89.65:55426.service - OpenSSH per-connection server daemon (139.178.89.65:55426). Dec 12 19:34:30.192000 audit[5370]: USER_ACCT pid=5370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:30.199242 sshd[5370]: Accepted publickey for core from 139.178.89.65 port 55426 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U Dec 12 19:34:30.199808 kernel: audit: type=1101 audit(1765568070.192:891): pid=5370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:30.199764 sshd-session[5370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 19:34:30.198000 audit[5370]: CRED_ACQ pid=5370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:30.208495 kernel: audit: type=1103 audit(1765568070.198:892): pid=5370 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:30.211620 kernel: audit: type=1006 audit(1765568070.198:893): pid=5370 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 12 19:34:30.211707 kernel: audit: type=1300 audit(1765568070.198:893): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff84fa20f0 a2=3 a3=0 items=0 ppid=1 pid=5370 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:30.198000 audit[5370]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff84fa20f0 a2=3 a3=0 items=0 ppid=1 pid=5370 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 19:34:30.216417 kernel: audit: type=1327 audit(1765568070.198:893): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:34:30.198000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 19:34:30.215991 systemd-logind[1642]: New session 24 of user core. Dec 12 19:34:30.219380 systemd[1]: Started session-24.scope - Session 24 of User core. 
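The NotFound and ImagePullBackOff entries above all originate at the registry, so the quickest confirmation is to resolve one of the tags against ghcr.io directly. A sketch of that check, standard library only; it assumes ghcr.io exposes the usual anonymous-token OCI distribution endpoints (/token, then /v2/<repo>/manifests/<tag>), which is not shown in this log, and the repository path is taken from the failing pulls above:

    import json
    import urllib.error
    import urllib.request

    def tag_exists(repo: str, tag: str) -> bool:
        # Anonymous pull token for the repository, then a manifest lookup.
        token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": ", ".join([
                    "application/vnd.oci.image.index.v1+json",
                    "application/vnd.oci.image.manifest.v1+json",
                    "application/vnd.docker.distribution.manifest.list.v2+json",
                    "application/vnd.docker.distribution.manifest.v2+json",
                ]),
            },
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.status == 200
        except urllib.error.HTTPError as err:
            if err.code == 404:  # mirrors the "404 Not Found" fetches in the log
                return False
            raise

    print(tag_exists("flatcar/calico/apiserver", "v3.30.4"))  # expected False here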
Dec 12 19:34:30.220772 kubelet[2938]: E1212 19:34:30.220707 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-26fhz" podUID="ae74493f-92fd-45a3-a4ca-78630ba178f3" Dec 12 19:34:30.229983 kernel: audit: type=1105 audit(1765568070.225:894): pid=5370 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:30.225000 audit[5370]: USER_START pid=5370 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:30.237461 kernel: audit: type=1103 audit(1765568070.232:895): pid=5373 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:30.232000 audit[5373]: CRED_ACQ pid=5373 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:30.775947 sshd[5373]: Connection closed by 139.178.89.65 port 55426 Dec 12 19:34:30.776699 sshd-session[5370]: pam_unix(sshd:session): session closed for user core Dec 12 19:34:30.779000 audit[5370]: USER_END pid=5370 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:30.788609 kernel: audit: type=1106 audit(1765568070.779:896): pid=5370 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:30.788786 kernel: audit: type=1104 audit(1765568070.779:897): pid=5370 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:30.779000 audit[5370]: CRED_DISP pid=5370 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 19:34:30.796857 systemd-logind[1642]: Session 24 logged out. Waiting for processes to exit. 
Dec 12 19:34:30.797854 systemd[1]: sshd@21-10.244.101.34:22-139.178.89.65:55426.service: Deactivated successfully.
Dec 12 19:34:30.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.244.101.34:22-139.178.89.65:55426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 19:34:30.803319 systemd[1]: session-24.scope: Deactivated successfully.
Dec 12 19:34:30.811324 systemd-logind[1642]: Removed session 24.
Dec 12 19:34:31.215454 kubelet[2938]: E1212 19:34:31.215347 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6595c59d9d-t84l2" podUID="b2777763-2a14-4bdc-bc89-41885f08913f"
Dec 12 19:34:35.218994 kubelet[2938]: E1212 19:34:35.218555 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dd47b96bf-8h6g2" podUID="efa1b34a-a9a0-4c65-83a8-6ddcfdeb4bac"
Dec 12 19:34:35.218994 kubelet[2938]: E1212 19:34:35.218654 2938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58bf9fd49d-cblcg" podUID="bde65fa8-758f-4f39-b274-b1238cc0fdac"
Dec 12 19:34:35.934657 systemd[1]: Started sshd@22-10.244.101.34:22-139.178.89.65:33716.service - OpenSSH per-connection server daemon (139.178.89.65:33716).
Dec 12 19:34:35.941979 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 12 19:34:35.942089 kernel: audit: type=1130 audit(1765568075.936:899): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.244.101.34:22-139.178.89.65:33716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 19:34:35.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.244.101.34:22-139.178.89.65:33716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 19:34:36.768744 kernel: audit: type=1101 audit(1765568076.754:900): pid=5387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 19:34:36.754000 audit[5387]: USER_ACCT pid=5387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 19:34:36.777418 kernel: audit: type=1103 audit(1765568076.770:901): pid=5387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 19:34:36.777542 kernel: audit: type=1006 audit(1765568076.770:902): pid=5387 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Dec 12 19:34:36.770000 audit[5387]: CRED_ACQ pid=5387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 19:34:36.771705 sshd-session[5387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 19:34:36.781132 kernel: audit: type=1300 audit(1765568076.770:902): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb5414130 a2=3 a3=0 items=0 ppid=1 pid=5387 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 19:34:36.770000 audit[5387]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb5414130 a2=3 a3=0 items=0 ppid=1 pid=5387 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 12 19:34:36.781564 sshd[5387]: Accepted publickey for core from 139.178.89.65 port 33716 ssh2: RSA SHA256:U/AC7kAb2Y9b6JBzh4Bej2PrOsGgOl21u8JoE/k+m8U
Dec 12 19:34:36.770000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 19:34:36.787459 kernel: audit: type=1327 audit(1765568076.770:902): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 12 19:34:36.791280 systemd-logind[1642]: New session 25 of user core.
Dec 12 19:34:36.798733 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 12 19:34:36.804000 audit[5387]: USER_START pid=5387 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 19:34:36.808915 kernel: audit: type=1105 audit(1765568076.804:903): pid=5387 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 19:34:36.809000 audit[5390]: CRED_ACQ pid=5390 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 19:34:36.813465 kernel: audit: type=1103 audit(1765568076.809:904): pid=5390 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 19:34:37.345483 sshd[5390]: Connection closed by 139.178.89.65 port 33716
Dec 12 19:34:37.345289 sshd-session[5387]: pam_unix(sshd:session): session closed for user core
Dec 12 19:34:37.349000 audit[5387]: USER_END pid=5387 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 19:34:37.362382 kernel: audit: type=1106 audit(1765568077.349:905): pid=5387 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 19:34:37.363417 systemd[1]: sshd@22-10.244.101.34:22-139.178.89.65:33716.service: Deactivated successfully.
Dec 12 19:34:37.349000 audit[5387]: CRED_DISP pid=5387 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 19:34:37.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.244.101.34:22-139.178.89.65:33716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 12 19:34:37.369706 kernel: audit: type=1104 audit(1765568077.349:906): pid=5387 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 12 19:34:37.370349 systemd[1]: session-25.scope: Deactivated successfully.
Dec 12 19:34:37.374493 systemd-logind[1642]: Session 25 logged out. Waiting for processes to exit.
Dec 12 19:34:37.376773 systemd-logind[1642]: Removed session 25.