Dec 16 03:50:33.292384 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:18:19 -00 2025
Dec 16 03:50:33.292441 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 03:50:33.292459 kernel: BIOS-provided physical RAM map:
Dec 16 03:50:33.292475 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 16 03:50:33.292493 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 16 03:50:33.292505 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 16 03:50:33.292518 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 16 03:50:33.292543 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 16 03:50:33.292556 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 16 03:50:33.292568 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 16 03:50:33.292580 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 03:50:33.292591 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 16 03:50:33.292603 kernel: NX (Execute Disable) protection: active
Dec 16 03:50:33.292620 kernel: APIC: Static calls initialized
Dec 16 03:50:33.292633 kernel: SMBIOS 2.8 present.
Dec 16 03:50:33.292646 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 16 03:50:33.292659 kernel: DMI: Memory slots populated: 1/1
Dec 16 03:50:33.292676 kernel: Hypervisor detected: KVM
Dec 16 03:50:33.292688 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 16 03:50:33.292700 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 03:50:33.292713 kernel: kvm-clock: using sched offset of 5115428899 cycles
Dec 16 03:50:33.292754 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 03:50:33.292768 kernel: tsc: Detected 2499.998 MHz processor
Dec 16 03:50:33.292782 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 03:50:33.292795 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 03:50:33.292814 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 16 03:50:33.292827 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 16 03:50:33.292840 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 03:50:33.292852 kernel: Using GB pages for direct mapping
Dec 16 03:50:33.292865 kernel: ACPI: Early table checksum verification disabled
Dec 16 03:50:33.292878 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 16 03:50:33.292890 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:50:33.292903 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:50:33.292921 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:50:33.292946 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 16 03:50:33.292959 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:50:33.292972 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:50:33.292985 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:50:33.292997 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:50:33.293010 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 16 03:50:33.293033 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 16 03:50:33.293046 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 16 03:50:33.293059 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 16 03:50:33.293073 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 16 03:50:33.293091 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 16 03:50:33.293104 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 16 03:50:33.293117 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 16 03:50:33.293130 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 16 03:50:33.293144 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 16 03:50:33.293157 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Dec 16 03:50:33.293171 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Dec 16 03:50:33.293188 kernel: Zone ranges:
Dec 16 03:50:33.293202 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 03:50:33.293215 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 16 03:50:33.293228 kernel: Normal empty
Dec 16 03:50:33.293240 kernel: Device empty
Dec 16 03:50:33.293254 kernel: Movable zone start for each node
Dec 16 03:50:33.293267 kernel: Early memory node ranges
Dec 16 03:50:33.293280 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 16 03:50:33.293297 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 16 03:50:33.293310 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 16 03:50:33.293323 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 03:50:33.293336 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 03:50:33.293350 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 16 03:50:33.293363 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 03:50:33.293382 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 03:50:33.293401 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 03:50:33.293415 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 03:50:33.293428 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 03:50:33.293442 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 03:50:33.293455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 03:50:33.293468 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 03:50:33.293482 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 03:50:33.293495 kernel: TSC deadline timer available
Dec 16 03:50:33.293513 kernel: CPU topo: Max. logical packages: 16
Dec 16 03:50:33.293526 kernel: CPU topo: Max. logical dies: 16
Dec 16 03:50:33.293540 kernel: CPU topo: Max. dies per package: 1
Dec 16 03:50:33.293553 kernel: CPU topo: Max. threads per core: 1
Dec 16 03:50:33.293567 kernel: CPU topo: Num. cores per package: 1
Dec 16 03:50:33.293580 kernel: CPU topo: Num. threads per package: 1
Dec 16 03:50:33.293593 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Dec 16 03:50:33.293611 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 03:50:33.293624 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 16 03:50:33.293638 kernel: Booting paravirtualized kernel on KVM
Dec 16 03:50:33.293652 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 03:50:33.293665 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Dec 16 03:50:33.293679 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Dec 16 03:50:33.293692 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Dec 16 03:50:33.293710 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 16 03:50:33.293772 kernel: kvm-guest: PV spinlocks enabled
Dec 16 03:50:33.293792 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 03:50:33.293814 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 03:50:33.293828 kernel: random: crng init done
Dec 16 03:50:33.293842 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 03:50:33.293855 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 03:50:33.293875 kernel: Fallback order for Node 0: 0
Dec 16 03:50:33.293889 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Dec 16 03:50:33.293902 kernel: Policy zone: DMA32
Dec 16 03:50:33.293915 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 03:50:33.293939 kernel: software IO TLB: area num 16.
Dec 16 03:50:33.293954 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 16 03:50:33.293967 kernel: Kernel/User page tables isolation: enabled
Dec 16 03:50:33.293986 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 03:50:33.293999 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 03:50:33.294012 kernel: Dynamic Preempt: voluntary
Dec 16 03:50:33.294025 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 03:50:33.294044 kernel: rcu: RCU event tracing is enabled.
Dec 16 03:50:33.294059 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 16 03:50:33.294072 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 03:50:33.294090 kernel: Rude variant of Tasks RCU enabled.
Dec 16 03:50:33.294103 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 03:50:33.294116 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 03:50:33.294130 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 16 03:50:33.294143 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 16 03:50:33.294157 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 16 03:50:33.294170 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 16 03:50:33.294183 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 16 03:50:33.294201 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 03:50:33.294225 kernel: Console: colour VGA+ 80x25
Dec 16 03:50:33.294242 kernel: printk: legacy console [tty0] enabled
Dec 16 03:50:33.294256 kernel: printk: legacy console [ttyS0] enabled
Dec 16 03:50:33.294276 kernel: ACPI: Core revision 20240827
Dec 16 03:50:33.294291 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 03:50:33.294305 kernel: x2apic enabled
Dec 16 03:50:33.294319 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 03:50:33.294333 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 16 03:50:33.294352 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 16 03:50:33.294366 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 03:50:33.294380 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 16 03:50:33.294394 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 16 03:50:33.294411 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 03:50:33.294425 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 03:50:33.294439 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 03:50:33.294452 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 16 03:50:33.294466 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 03:50:33.294479 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 03:50:33.294493 kernel: MDS: Mitigation: Clear CPU buffers
Dec 16 03:50:33.294506 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 16 03:50:33.294520 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 16 03:50:33.294533 kernel: active return thunk: its_return_thunk
Dec 16 03:50:33.294546 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 03:50:33.294565 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 03:50:33.294579 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 03:50:33.294593 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 03:50:33.294607 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 03:50:33.294621 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 16 03:50:33.294634 kernel: Freeing SMP alternatives memory: 32K
Dec 16 03:50:33.294648 kernel: pid_max: default: 32768 minimum: 301
Dec 16 03:50:33.294661 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 03:50:33.294675 kernel: landlock: Up and running.
Dec 16 03:50:33.294688 kernel: SELinux: Initializing.
Dec 16 03:50:33.294706 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 03:50:33.294753 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 03:50:33.294768 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 16 03:50:33.294783 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 16 03:50:33.294797 kernel: signal: max sigframe size: 1776
Dec 16 03:50:33.294811 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 03:50:33.294825 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 03:50:33.294839 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level
Dec 16 03:50:33.294859 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 03:50:33.294876 kernel: smp: Bringing up secondary CPUs ...
Dec 16 03:50:33.294890 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 03:50:33.294904 kernel: .... node #0, CPUs: #1
Dec 16 03:50:33.294918 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 03:50:33.294952 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 16 03:50:33.294968 kernel: Memory: 1912060K/2096616K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15556K init, 2484K bss, 178540K reserved, 0K cma-reserved)
Dec 16 03:50:33.294987 kernel: devtmpfs: initialized
Dec 16 03:50:33.295001 kernel: x86/mm: Memory block size: 128MB
Dec 16 03:50:33.295015 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 03:50:33.295029 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 16 03:50:33.295043 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 03:50:33.295056 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 03:50:33.295070 kernel: audit: initializing netlink subsys (disabled)
Dec 16 03:50:33.295088 kernel: audit: type=2000 audit(1765857029.352:1): state=initialized audit_enabled=0 res=1
Dec 16 03:50:33.295102 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 03:50:33.295116 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 03:50:33.295130 kernel: cpuidle: using governor menu
Dec 16 03:50:33.295143 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 03:50:33.295158 kernel: dca service started, version 1.12.1
Dec 16 03:50:33.295177 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 16 03:50:33.295197 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 16 03:50:33.295212 kernel: PCI: Using configuration type 1 for base access
Dec 16 03:50:33.295225 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 03:50:33.295240 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 03:50:33.295254 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 03:50:33.295267 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 03:50:33.295281 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 03:50:33.295299 kernel: ACPI: Added _OSI(Module Device)
Dec 16 03:50:33.295314 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 03:50:33.295328 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 03:50:33.295342 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 03:50:33.295355 kernel: ACPI: Interpreter enabled
Dec 16 03:50:33.295369 kernel: ACPI: PM: (supports S0 S5)
Dec 16 03:50:33.295383 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 03:50:33.295397 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 03:50:33.295415 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 03:50:33.295429 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 03:50:33.295443 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 03:50:33.295805 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 03:50:33.296062 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 16 03:50:33.296296 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 16 03:50:33.296325 kernel: PCI host bridge to bus 0000:00
Dec 16 03:50:33.296578 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 03:50:33.296811 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 03:50:33.297039 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 03:50:33.297247 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 16 03:50:33.297463 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 16 03:50:33.297679 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 16 03:50:33.298848 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 03:50:33.299142 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 16 03:50:33.299405 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint
Dec 16 03:50:33.299661 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref]
Dec 16 03:50:33.299933 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff]
Dec 16 03:50:33.300184 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref]
Dec 16 03:50:33.300434 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 03:50:33.300695 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:50:33.300998 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff]
Dec 16 03:50:33.301235 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 16 03:50:33.301460 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 16 03:50:33.301685 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 03:50:33.301988 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:50:33.302218 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff]
Dec 16 03:50:33.302442 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 16 03:50:33.302683 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 16 03:50:33.302963 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 03:50:33.303212 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:50:33.303438 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff]
Dec 16 03:50:33.303666 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 16 03:50:33.303949 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 16 03:50:33.304187 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 03:50:33.304424 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:50:33.304655 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff]
Dec 16 03:50:33.304953 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 16 03:50:33.305182 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 16 03:50:33.305408 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 03:50:33.305667 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:50:33.305960 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff]
Dec 16 03:50:33.306188 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 16 03:50:33.306413 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 16 03:50:33.306661 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 03:50:33.306920 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:50:33.307170 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff]
Dec 16 03:50:33.307394 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 16 03:50:33.307618 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 16 03:50:33.307876 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 03:50:33.308129 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:50:33.308355 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff]
Dec 16 03:50:33.308600 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 16 03:50:33.308952 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 16 03:50:33.309182 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 03:50:33.309424 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:50:33.309651 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff]
Dec 16 03:50:33.309920 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 16 03:50:33.310160 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 16 03:50:33.310385 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 03:50:33.310631 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 03:50:33.310890 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df]
Dec 16 03:50:33.311133 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff]
Dec 16 03:50:33.311370 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 16 03:50:33.311595 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref]
Dec 16 03:50:33.311867 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 16 03:50:33.312110 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Dec 16 03:50:33.312343 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff]
Dec 16 03:50:33.312568 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 16 03:50:33.312856 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 16 03:50:33.313101 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 03:50:33.313386 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 16 03:50:33.313612 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff]
Dec 16 03:50:33.313871 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff]
Dec 16 03:50:33.314140 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 16 03:50:33.314376 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 16 03:50:33.314627 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Dec 16 03:50:33.314891 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit]
Dec 16 03:50:33.315136 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 16 03:50:33.315367 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 16 03:50:33.315601 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 16 03:50:33.315858 kernel: pci_bus 0000:02: extended config space not accessible
Dec 16 03:50:33.316125 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint
Dec 16 03:50:33.316389 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f]
Dec 16 03:50:33.316632 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 16 03:50:33.316917 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Dec 16 03:50:33.317166 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit]
Dec 16 03:50:33.317396 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 16 03:50:33.317678 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Dec 16 03:50:33.317944 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 16 03:50:33.318173 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 16 03:50:33.318407 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 16 03:50:33.318633 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 16 03:50:33.318903 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 16 03:50:33.319152 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 16 03:50:33.319380 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 16 03:50:33.319410 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 03:50:33.319425 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 03:50:33.319439 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 03:50:33.319459 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 03:50:33.319474 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 03:50:33.319496 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 03:50:33.319517 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 03:50:33.319537 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 03:50:33.319551 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 03:50:33.319565 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 03:50:33.319579 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 03:50:33.319593 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 03:50:33.319607 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 03:50:33.319621 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 03:50:33.319639 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 03:50:33.319653 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 03:50:33.319667 kernel: iommu: Default domain type: Translated
Dec 16 03:50:33.319681 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 03:50:33.319696 kernel: PCI: Using ACPI for IRQ routing
Dec 16 03:50:33.319710 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 03:50:33.319741 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 16 03:50:33.319761 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 16 03:50:33.320002 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 03:50:33.320241 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 03:50:33.320474 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 03:50:33.320496 kernel: vgaarb: loaded
Dec 16 03:50:33.320510 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 03:50:33.320524 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 03:50:33.320554 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 03:50:33.320568 kernel: pnp: PnP ACPI init
Dec 16 03:50:33.320837 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 16 03:50:33.320861 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 03:50:33.320876 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 03:50:33.320890 kernel: NET: Registered PF_INET protocol family
Dec 16 03:50:33.320905 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 03:50:33.320938 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 16 03:50:33.320954 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 03:50:33.320969 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 03:50:33.320983 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 16 03:50:33.320998 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 16 03:50:33.321012 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 03:50:33.321026 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 03:50:33.321046 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 03:50:33.321060 kernel: NET: Registered PF_XDP protocol family
Dec 16 03:50:33.321302 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 16 03:50:33.321535 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 16 03:50:33.321778 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 16 03:50:33.322019 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 16 03:50:33.322253 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 16 03:50:33.322505 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 16 03:50:33.322781 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 16 03:50:33.323023 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 16 03:50:33.323250 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Dec 16 03:50:33.323479 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Dec 16 03:50:33.323709 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Dec 16 03:50:33.323972 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Dec 16 03:50:33.324199 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Dec 16 03:50:33.324435 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Dec 16 03:50:33.324652 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Dec 16 03:50:33.326207 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Dec 16 03:50:33.326448 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 16 03:50:33.327015 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 16 03:50:33.330022 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 16 03:50:33.330269 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 16 03:50:33.330501 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 16 03:50:33.330804 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 03:50:33.331053 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 16 03:50:33.331282 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 16 03:50:33.331553 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 16 03:50:33.331822 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 03:50:33.332094 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 16 03:50:33.332325 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 16 03:50:33.332570 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 16 03:50:33.332835 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 03:50:33.333078 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 16 03:50:33.333302 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 16 03:50:33.333537 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 16 03:50:33.333773 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 03:50:33.334030 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 16 03:50:33.334283 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 16 03:50:33.334511 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 16 03:50:33.336791 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 03:50:33.337059 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 16 03:50:33.337293 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 16 03:50:33.337542 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 16 03:50:33.337815 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 03:50:33.338062 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 16 03:50:33.338376 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 16 03:50:33.338719 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 16 03:50:33.339499 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 03:50:33.339775 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 16 03:50:33.340018 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 16 03:50:33.340254 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 16 03:50:33.341147 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 03:50:33.341376 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 03:50:33.341587 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 03:50:33.341853 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 03:50:33.342079 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 16 03:50:33.342299 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 16 03:50:33.342506 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 16 03:50:33.342751 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 16 03:50:33.342996 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 16 03:50:33.343228 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 03:50:33.343454 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 16 03:50:33.343687 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 16 03:50:33.343965 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 16 03:50:33.344182 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 03:50:33.344405 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 16 03:50:33.346178 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 16 03:50:33.346411 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 03:50:33.346649 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 16 03:50:33.350930 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 16 03:50:33.351165 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 03:50:33.351414 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 16 03:50:33.351631 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 16 03:50:33.351913 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 03:50:33.352161 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 16 03:50:33.352376 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 16
03:50:33.352588 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 03:50:33.355565 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Dec 16 03:50:33.355888 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Dec 16 03:50:33.356142 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 03:50:33.356379 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Dec 16 03:50:33.356595 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 16 03:50:33.356836 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 03:50:33.356860 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 16 03:50:33.356876 kernel: PCI: CLS 0 bytes, default 64 Dec 16 03:50:33.356899 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 16 03:50:33.356914 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 16 03:50:33.356941 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 16 03:50:33.356958 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 16 03:50:33.356973 kernel: Initialise system trusted keyrings Dec 16 03:50:33.356988 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 16 03:50:33.357003 kernel: Key type asymmetric registered Dec 16 03:50:33.357023 kernel: Asymmetric key parser 'x509' registered Dec 16 03:50:33.357038 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 03:50:33.357052 kernel: io scheduler mq-deadline registered Dec 16 03:50:33.357067 kernel: io scheduler kyber registered Dec 16 03:50:33.357082 kernel: io scheduler bfq registered Dec 16 03:50:33.357316 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 16 03:50:33.357547 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 16 03:50:33.357800 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ 
PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 03:50:33.358045 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 16 03:50:33.358272 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 16 03:50:33.358522 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 03:50:33.358769 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 16 03:50:33.359023 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 16 03:50:33.359261 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 03:50:33.359493 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 16 03:50:33.359718 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 16 03:50:33.360005 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 03:50:33.360252 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 16 03:50:33.360488 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 16 03:50:33.360712 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 03:50:33.360973 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 16 03:50:33.361200 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 16 03:50:33.361446 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 03:50:33.361685 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 16 03:50:33.361946 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 16 03:50:33.362174 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- 
AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 03:50:33.362399 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 16 03:50:33.362632 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 16 03:50:33.362879 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 03:50:33.362909 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 03:50:33.362936 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 16 03:50:33.362953 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 16 03:50:33.362967 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 03:50:33.362988 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 03:50:33.363004 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 03:50:33.363019 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 03:50:33.363034 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 03:50:33.363265 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 16 03:50:33.363289 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 03:50:33.363519 kernel: rtc_cmos 00:03: registered as rtc0 Dec 16 03:50:33.363759 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T03:50:31 UTC (1765857031) Dec 16 03:50:33.363993 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 16 03:50:33.364016 kernel: intel_pstate: CPU model not supported Dec 16 03:50:33.364031 kernel: NET: Registered PF_INET6 protocol family Dec 16 03:50:33.364046 kernel: Segment Routing with IPv6 Dec 16 03:50:33.364060 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 03:50:33.364082 kernel: NET: Registered PF_PACKET protocol family Dec 16 03:50:33.364097 kernel: Key type dns_resolver registered Dec 16 03:50:33.364112 kernel: IPI shorthand broadcast: enabled Dec 16 03:50:33.364127 kernel: 
sched_clock: Marking stable (2243005405, 231417925)->(2599312703, -124889373) Dec 16 03:50:33.364141 kernel: registered taskstats version 1 Dec 16 03:50:33.364156 kernel: Loading compiled-in X.509 certificates Dec 16 03:50:33.364171 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: aafd1eb27ea805b8231c3bede9210239fae84df8' Dec 16 03:50:33.364191 kernel: Demotion targets for Node 0: null Dec 16 03:50:33.364217 kernel: Key type .fscrypt registered Dec 16 03:50:33.364231 kernel: Key type fscrypt-provisioning registered Dec 16 03:50:33.364244 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 03:50:33.364259 kernel: ima: Allocated hash algorithm: sha1 Dec 16 03:50:33.364273 kernel: ima: No architecture policies found Dec 16 03:50:33.364287 kernel: clk: Disabling unused clocks Dec 16 03:50:33.364302 kernel: Freeing unused kernel image (initmem) memory: 15556K Dec 16 03:50:33.364333 kernel: Write protecting the kernel read-only data: 47104k Dec 16 03:50:33.364348 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Dec 16 03:50:33.364363 kernel: Run /init as init process Dec 16 03:50:33.364377 kernel: with arguments: Dec 16 03:50:33.364392 kernel: /init Dec 16 03:50:33.364406 kernel: with environment: Dec 16 03:50:33.364420 kernel: HOME=/ Dec 16 03:50:33.364439 kernel: TERM=linux Dec 16 03:50:33.364454 kernel: ACPI: bus type USB registered Dec 16 03:50:33.364468 kernel: usbcore: registered new interface driver usbfs Dec 16 03:50:33.364483 kernel: usbcore: registered new interface driver hub Dec 16 03:50:33.364498 kernel: usbcore: registered new device driver usb Dec 16 03:50:33.364728 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 16 03:50:33.364997 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 16 03:50:33.365237 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 16 03:50:33.365466 kernel: xhci_hcd 0000:03:00.0: xHCI Host 
Controller Dec 16 03:50:33.365695 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 16 03:50:33.365961 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 16 03:50:33.366253 kernel: hub 1-0:1.0: USB hub found Dec 16 03:50:33.366505 kernel: hub 1-0:1.0: 4 ports detected Dec 16 03:50:33.366808 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 16 03:50:33.367100 kernel: hub 2-0:1.0: USB hub found Dec 16 03:50:33.367347 kernel: hub 2-0:1.0: 4 ports detected Dec 16 03:50:33.367369 kernel: SCSI subsystem initialized Dec 16 03:50:33.367384 kernel: libata version 3.00 loaded. Dec 16 03:50:33.367643 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 03:50:33.367673 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 03:50:33.367918 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 03:50:33.368159 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 03:50:33.368392 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 03:50:33.368645 kernel: scsi host0: ahci Dec 16 03:50:33.368932 kernel: scsi host1: ahci Dec 16 03:50:33.369183 kernel: scsi host2: ahci Dec 16 03:50:33.369460 kernel: scsi host3: ahci Dec 16 03:50:33.369696 kernel: scsi host4: ahci Dec 16 03:50:33.369978 kernel: scsi host5: ahci Dec 16 03:50:33.370009 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Dec 16 03:50:33.370029 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Dec 16 03:50:33.370044 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Dec 16 03:50:33.370059 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Dec 16 03:50:33.370074 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Dec 16 03:50:33.370089 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 
lpm-pol 1 Dec 16 03:50:33.370359 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 16 03:50:33.370389 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 03:50:33.370404 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 03:50:33.370418 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 16 03:50:33.370432 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 03:50:33.370446 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 03:50:33.370460 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 03:50:33.370479 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 03:50:33.370728 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Dec 16 03:50:33.370771 kernel: usbcore: registered new interface driver usbhid Dec 16 03:50:33.371015 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 16 03:50:33.371037 kernel: usbhid: USB HID core driver Dec 16 03:50:33.371053 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 03:50:33.371075 kernel: GPT:25804799 != 125829119 Dec 16 03:50:33.371089 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 03:50:33.371104 kernel: GPT:25804799 != 125829119 Dec 16 03:50:33.371118 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 03:50:33.371132 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 03:50:33.371147 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 16 03:50:33.371434 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 16 03:50:33.371462 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 16 03:50:33.371477 kernel: device-mapper: uevent: version 1.0.3 Dec 16 03:50:33.371492 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 03:50:33.371506 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 16 03:50:33.371521 kernel: raid6: sse2x4 gen() 12326 MB/s Dec 16 03:50:33.371535 kernel: raid6: sse2x2 gen() 8545 MB/s Dec 16 03:50:33.371549 kernel: raid6: sse2x1 gen() 8960 MB/s Dec 16 03:50:33.371568 kernel: raid6: using algorithm sse2x4 gen() 12326 MB/s Dec 16 03:50:33.371582 kernel: raid6: .... xor() 7450 MB/s, rmw enabled Dec 16 03:50:33.371597 kernel: raid6: using ssse3x2 recovery algorithm Dec 16 03:50:33.371611 kernel: xor: automatically using best checksumming function avx Dec 16 03:50:33.371626 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 03:50:33.371640 kernel: BTRFS: device fsid 57a8262f-2900-48ba-a17e-aafbd70d59c7 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (195) Dec 16 03:50:33.371655 kernel: BTRFS info (device dm-0): first mount of filesystem 57a8262f-2900-48ba-a17e-aafbd70d59c7 Dec 16 03:50:33.371673 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:50:33.371688 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 03:50:33.371715 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 03:50:33.371729 kernel: loop: module loaded Dec 16 03:50:33.371762 kernel: loop0: detected capacity change from 0 to 100528 Dec 16 03:50:33.371778 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 03:50:33.371795 systemd[1]: Successfully made /usr/ read-only. 
Dec 16 03:50:33.371821 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 03:50:33.371837 systemd[1]: Detected virtualization kvm. Dec 16 03:50:33.371852 systemd[1]: Detected architecture x86-64. Dec 16 03:50:33.371868 systemd[1]: Running in initrd. Dec 16 03:50:33.371883 systemd[1]: No hostname configured, using default hostname. Dec 16 03:50:33.371904 systemd[1]: Hostname set to . Dec 16 03:50:33.371931 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 03:50:33.371949 systemd[1]: Queued start job for default target initrd.target. Dec 16 03:50:33.371965 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 03:50:33.371981 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:50:33.371996 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:50:33.372013 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 03:50:33.372035 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:50:33.372051 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 03:50:33.372067 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 03:50:33.372083 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:50:33.372099 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Dec 16 03:50:33.372119 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 03:50:33.372135 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:50:33.372151 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:50:33.372166 systemd[1]: Reached target swap.target - Swaps. Dec 16 03:50:33.372182 systemd[1]: Reached target timers.target - Timer Units. Dec 16 03:50:33.372197 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 03:50:33.372213 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 03:50:33.372233 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:50:33.372249 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 03:50:33.372265 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 03:50:33.372281 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:50:33.372296 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:50:33.372312 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:50:33.372327 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:50:33.372348 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 03:50:33.372364 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 03:50:33.372380 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 03:50:33.372396 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 03:50:33.372412 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). 
Dec 16 03:50:33.372428 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 03:50:33.372444 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:50:33.372464 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 03:50:33.372481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:50:33.372497 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 03:50:33.372517 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:50:33.372533 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 03:50:33.372609 systemd-journald[331]: Collecting audit messages is enabled. Dec 16 03:50:33.372644 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 03:50:33.372666 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 03:50:33.372681 kernel: Bridge firewalling registered Dec 16 03:50:33.372699 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:50:33.372727 systemd-journald[331]: Journal started Dec 16 03:50:33.372785 systemd-journald[331]: Runtime Journal (/run/log/journal/f65e21e24a8b4eb7adb0cfd60b4f36fc) is 4.7M, max 37.7M, 33M free. Dec 16 03:50:33.351333 systemd-modules-load[334]: Inserted module 'br_netfilter' Dec 16 03:50:33.429651 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 03:50:33.429687 kernel: audit: type=1130 audit(1765857033.421:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 03:50:33.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.435769 kernel: audit: type=1130 audit(1765857033.429:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.435553 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:50:33.442428 kernel: audit: type=1130 audit(1765857033.435:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.438433 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:50:33.449839 kernel: audit: type=1130 audit(1765857033.442:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.450008 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 03:50:33.451959 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Dec 16 03:50:33.457911 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:50:33.464387 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 03:50:33.484341 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:50:33.492848 kernel: audit: type=1130 audit(1765857033.484:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.490461 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:50:33.499826 kernel: audit: type=1130 audit(1765857033.492:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.500314 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 03:50:33.504755 kernel: audit: type=1334 audit(1765857033.495:8): prog-id=6 op=LOAD Dec 16 03:50:33.495000 audit: BPF prog-id=6 op=LOAD Dec 16 03:50:33.505116 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 03:50:33.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:50:33.509905 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 03:50:33.515256 kernel: audit: type=1130 audit(1765857033.506:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.512961 systemd-tmpfiles[353]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 03:50:33.524505 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:50:33.533650 kernel: audit: type=1130 audit(1765857033.525:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.551043 dracut-cmdline[369]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 03:50:33.590375 systemd-resolved[368]: Positive Trust Anchors: Dec 16 03:50:33.590405 systemd-resolved[368]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:50:33.590412 systemd-resolved[368]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:50:33.590467 systemd-resolved[368]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:50:33.628599 systemd-resolved[368]: Defaulting to hostname 'linux'. Dec 16 03:50:33.631604 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 03:50:33.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.633992 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:50:33.697756 kernel: Loading iSCSI transport class v2.0-870. Dec 16 03:50:33.716792 kernel: iscsi: registered transport (tcp) Dec 16 03:50:33.744953 kernel: iscsi: registered transport (qla4xxx) Dec 16 03:50:33.745011 kernel: QLogic iSCSI HBA Driver Dec 16 03:50:33.781995 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:50:33.812020 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:50:33.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.813926 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Dec 16 03:50:33.882191 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 03:50:33.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.885765 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 03:50:33.888900 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 03:50:33.930383 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 03:50:33.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.932000 audit: BPF prog-id=7 op=LOAD Dec 16 03:50:33.932000 audit: BPF prog-id=8 op=LOAD Dec 16 03:50:33.934272 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:50:33.972676 systemd-udevd[611]: Using default interface naming scheme 'v257'. Dec 16 03:50:33.990610 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:50:33.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:33.994317 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 03:50:34.031711 dracut-pre-trigger[679]: rd.md=0: removing MD RAID activation Dec 16 03:50:34.035112 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 03:50:34.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:50:34.038000 audit: BPF prog-id=9 op=LOAD Dec 16 03:50:34.040956 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 03:50:34.079537 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 03:50:34.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:34.083664 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:50:34.106202 systemd-networkd[720]: lo: Link UP Dec 16 03:50:34.106213 systemd-networkd[720]: lo: Gained carrier Dec 16 03:50:34.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:34.109941 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 03:50:34.110807 systemd[1]: Reached target network.target - Network. Dec 16 03:50:34.245818 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:50:34.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:34.250920 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 03:50:34.348658 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 03:50:34.380565 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 03:50:34.425652 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 03:50:34.427952 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Dec 16 03:50:34.470324 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 03:50:34.474379 disk-uuid[775]: Primary Header is updated. Dec 16 03:50:34.474379 disk-uuid[775]: Secondary Entries is updated. Dec 16 03:50:34.474379 disk-uuid[775]: Secondary Header is updated. Dec 16 03:50:34.555763 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 03:50:34.583685 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:50:34.583927 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:50:34.598931 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 16 03:50:34.598966 kernel: audit: type=1131 audit(1765857034.584:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:34.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:34.585631 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:50:34.609928 kernel: AES CTR mode by8 optimization enabled Dec 16 03:50:34.599556 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:50:34.600440 systemd-networkd[720]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:50:34.600456 systemd-networkd[720]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 03:50:34.602568 systemd-networkd[720]: eth0: Link UP Dec 16 03:50:34.602932 systemd-networkd[720]: eth0: Gained carrier Dec 16 03:50:34.602947 systemd-networkd[720]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:50:34.619800 systemd-networkd[720]: eth0: DHCPv4 address 10.230.36.234/30, gateway 10.230.36.233 acquired from 10.230.36.233 Dec 16 03:50:34.664978 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 16 03:50:34.784490 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:50:34.791762 kernel: audit: type=1130 audit(1765857034.785:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:34.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:34.805937 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 03:50:34.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:34.808995 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 03:50:34.813840 kernel: audit: type=1130 audit(1765857034.806:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:34.814576 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:50:34.815383 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Dec 16 03:50:34.818417 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 03:50:34.848537 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 03:50:34.855278 kernel: audit: type=1130 audit(1765857034.848:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:34.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:35.547749 disk-uuid[776]: Warning: The kernel is still using the old partition table. Dec 16 03:50:35.547749 disk-uuid[776]: The new table will be used at the next reboot or after you Dec 16 03:50:35.547749 disk-uuid[776]: run partprobe(8) or kpartx(8) Dec 16 03:50:35.547749 disk-uuid[776]: The operation has completed successfully. Dec 16 03:50:35.557493 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 03:50:35.569155 kernel: audit: type=1130 audit(1765857035.557:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:35.569191 kernel: audit: type=1131 audit(1765857035.557:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:35.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:35.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:50:35.557670 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 03:50:35.560540 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 03:50:35.601745 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (860) Dec 16 03:50:35.606886 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:50:35.606924 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:50:35.613165 kernel: BTRFS info (device vda6): turning on async discard Dec 16 03:50:35.613225 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 03:50:35.622767 kernel: BTRFS info (device vda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:50:35.623773 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 03:50:35.630316 kernel: audit: type=1130 audit(1765857035.623:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:35.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:35.627944 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 03:50:35.638955 systemd-networkd[720]: eth0: Gained IPv6LL Dec 16 03:50:35.823413 ignition[879]: Ignition 2.24.0 Dec 16 03:50:35.823444 ignition[879]: Stage: fetch-offline Dec 16 03:50:35.823595 ignition[879]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:50:35.823619 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 03:50:35.826607 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Dec 16 03:50:35.834069 kernel: audit: type=1130 audit(1765857035.827:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:35.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:35.823895 ignition[879]: parsed url from cmdline: "" Dec 16 03:50:35.831935 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 16 03:50:35.823903 ignition[879]: no config URL provided Dec 16 03:50:35.824065 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 03:50:35.824101 ignition[879]: no config at "/usr/lib/ignition/user.ign" Dec 16 03:50:35.824112 ignition[879]: failed to fetch config: resource requires networking Dec 16 03:50:35.824510 ignition[879]: Ignition finished successfully Dec 16 03:50:35.865577 ignition[885]: Ignition 2.24.0 Dec 16 03:50:35.865600 ignition[885]: Stage: fetch Dec 16 03:50:35.865908 ignition[885]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:50:35.865928 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 03:50:35.866065 ignition[885]: parsed url from cmdline: "" Dec 16 03:50:35.866073 ignition[885]: no config URL provided Dec 16 03:50:35.866083 ignition[885]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 03:50:35.866097 ignition[885]: no config at "/usr/lib/ignition/user.ign" Dec 16 03:50:35.867093 ignition[885]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 16 03:50:35.868622 ignition[885]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 16 03:50:35.868666 ignition[885]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Dec 16 03:50:35.883983 ignition[885]: GET result: OK Dec 16 03:50:35.884185 ignition[885]: parsing config with SHA512: c497171dae4e869408f88cb386990825e25c993a581df5822d4cfd6035c90095ad6253311433f144a018b657dd414a7141079cc44da2ec57a12ae4a96ff85327 Dec 16 03:50:35.894243 unknown[885]: fetched base config from "system" Dec 16 03:50:35.894266 unknown[885]: fetched base config from "system" Dec 16 03:50:35.894740 ignition[885]: fetch: fetch complete Dec 16 03:50:35.894277 unknown[885]: fetched user config from "openstack" Dec 16 03:50:35.894749 ignition[885]: fetch: fetch passed Dec 16 03:50:35.897411 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 03:50:35.904292 kernel: audit: type=1130 audit(1765857035.898:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:35.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:35.894829 ignition[885]: Ignition finished successfully Dec 16 03:50:35.901931 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 03:50:35.938915 ignition[891]: Ignition 2.24.0 Dec 16 03:50:35.938938 ignition[891]: Stage: kargs Dec 16 03:50:35.939185 ignition[891]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:50:35.939202 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 03:50:35.944003 ignition[891]: kargs: kargs passed Dec 16 03:50:35.944695 ignition[891]: Ignition finished successfully Dec 16 03:50:35.946899 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 03:50:35.953303 kernel: audit: type=1130 audit(1765857035.946:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 03:50:35.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:35.949936 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 03:50:35.983305 ignition[897]: Ignition 2.24.0 Dec 16 03:50:35.983327 ignition[897]: Stage: disks Dec 16 03:50:35.983575 ignition[897]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:50:35.983594 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 03:50:35.984916 ignition[897]: disks: disks passed Dec 16 03:50:35.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:35.986497 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 03:50:35.984990 ignition[897]: Ignition finished successfully Dec 16 03:50:35.988275 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 03:50:35.989337 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 03:50:35.990927 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 03:50:35.992266 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 03:50:35.993779 systemd[1]: Reached target basic.target - Basic System. Dec 16 03:50:35.996939 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 03:50:36.039883 systemd-fsck[905]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 16 03:50:36.043805 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Dec 16 03:50:36.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:36.048163 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 03:50:36.199841 kernel: EXT4-fs (vda9): mounted filesystem 1314c107-11a5-486b-9d52-be9f57b6bf1b r/w with ordered data mode. Quota mode: none. Dec 16 03:50:36.199500 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 03:50:36.201724 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 03:50:36.205504 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 03:50:36.208238 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 03:50:36.211012 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 03:50:36.219050 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 16 03:50:36.220874 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 03:50:36.220918 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 03:50:36.225624 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 03:50:36.229948 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 16 03:50:36.232021 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (913) Dec 16 03:50:36.236809 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:50:36.239760 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:50:36.248389 kernel: BTRFS info (device vda6): turning on async discard Dec 16 03:50:36.248434 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 03:50:36.254670 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 03:50:36.327766 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 03:50:36.486261 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 03:50:36.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:36.488972 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 03:50:36.490925 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 03:50:36.511772 kernel: BTRFS info (device vda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:50:36.532721 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 03:50:36.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:50:36.548675 ignition[1016]: INFO : Ignition 2.24.0 Dec 16 03:50:36.548675 ignition[1016]: INFO : Stage: mount Dec 16 03:50:36.551354 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:50:36.551354 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 03:50:36.551354 ignition[1016]: INFO : mount: mount passed Dec 16 03:50:36.551354 ignition[1016]: INFO : Ignition finished successfully Dec 16 03:50:36.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:36.552595 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 03:50:36.587350 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 03:50:37.146404 systemd-networkd[720]: eth0: Ignoring DHCPv6 address 2a02:1348:179:893a:24:19ff:fee6:24ea/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:893a:24:19ff:fee6:24ea/64 assigned by NDisc. Dec 16 03:50:37.146424 systemd-networkd[720]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Dec 16 03:50:37.364767 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 03:50:39.376848 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 03:50:43.387757 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 03:50:43.393844 coreos-metadata[915]: Dec 16 03:50:43.393 WARN failed to locate config-drive, using the metadata service API instead Dec 16 03:50:43.419778 coreos-metadata[915]: Dec 16 03:50:43.419 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 16 03:50:43.434279 coreos-metadata[915]: Dec 16 03:50:43.434 INFO Fetch successful Dec 16 03:50:43.435376 coreos-metadata[915]: Dec 16 03:50:43.435 INFO wrote hostname srv-n64tt.gb1.brightbox.com to /sysroot/etc/hostname Dec 16 03:50:43.437472 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 16 03:50:43.452730 kernel: kauditd_printk_skb: 5 callbacks suppressed Dec 16 03:50:43.452770 kernel: audit: type=1130 audit(1765857043.439:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:43.452803 kernel: audit: type=1131 audit(1765857043.439:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:43.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:43.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:50:43.437749 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 16 03:50:43.442839 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 03:50:43.466596 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 03:50:43.506740 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1031) Dec 16 03:50:43.506794 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:50:43.509954 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:50:43.514955 kernel: BTRFS info (device vda6): turning on async discard Dec 16 03:50:43.514997 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 03:50:43.519172 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 03:50:43.555924 ignition[1048]: INFO : Ignition 2.24.0 Dec 16 03:50:43.555924 ignition[1048]: INFO : Stage: files Dec 16 03:50:43.557823 ignition[1048]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:50:43.557823 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 03:50:43.557823 ignition[1048]: DEBUG : files: compiled without relabeling support, skipping Dec 16 03:50:43.560465 ignition[1048]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 03:50:43.560465 ignition[1048]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 03:50:43.564176 ignition[1048]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 03:50:43.565508 ignition[1048]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 03:50:43.566479 ignition[1048]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 03:50:43.565625 unknown[1048]: wrote ssh authorized keys file for user: core Dec 16 03:50:43.569678 
ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 03:50:43.569678 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 03:50:43.762307 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 03:50:44.052176 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 03:50:44.052176 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 03:50:44.055225 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 03:50:44.055225 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 03:50:44.055225 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 03:50:44.055225 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 03:50:44.055225 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 03:50:44.055225 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 03:50:44.055225 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 03:50:44.055225 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 03:50:44.064292 
ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 03:50:44.064292 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 03:50:44.064292 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 03:50:44.064292 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 03:50:44.064292 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Dec 16 03:50:44.759661 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 03:50:49.512888 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 03:50:49.512888 ignition[1048]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 03:50:49.518190 ignition[1048]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 03:50:49.520072 ignition[1048]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 03:50:49.520072 ignition[1048]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 03:50:49.522341 ignition[1048]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 03:50:49.522341 ignition[1048]: INFO : files: op(d): [finished] 
setting preset to enabled for "prepare-helm.service" Dec 16 03:50:49.522341 ignition[1048]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 03:50:49.522341 ignition[1048]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 03:50:49.522341 ignition[1048]: INFO : files: files passed Dec 16 03:50:49.522341 ignition[1048]: INFO : Ignition finished successfully Dec 16 03:50:49.535726 kernel: audit: type=1130 audit(1765857049.525:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.523311 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 03:50:49.530928 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 03:50:49.535985 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 03:50:49.548394 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 03:50:49.548597 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 03:50:49.562597 kernel: audit: type=1130 audit(1765857049.549:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.562655 kernel: audit: type=1131 audit(1765857049.549:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:50:49.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.568596 initrd-setup-root-after-ignition[1081]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:50:49.568596 initrd-setup-root-after-ignition[1081]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:50:49.571790 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:50:49.573678 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 03:50:49.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.580802 kernel: audit: type=1130 audit(1765857049.574:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.580977 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 03:50:49.584013 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 03:50:49.641631 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 03:50:49.641863 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Dec 16 03:50:49.654185 kernel: audit: type=1130 audit(1765857049.642:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.654221 kernel: audit: type=1131 audit(1765857049.642:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.643676 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 03:50:49.654895 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 03:50:49.656702 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 03:50:49.658914 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 03:50:49.692567 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 03:50:49.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.695918 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 03:50:49.702092 kernel: audit: type=1130 audit(1765857049.692:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 03:50:49.723629 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 03:50:49.725011 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:50:49.726867 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:50:49.728016 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 03:50:49.729558 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 03:50:49.736726 kernel: audit: type=1131 audit(1765857049.730:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.729843 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 03:50:49.736647 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 03:50:49.737676 systemd[1]: Stopped target basic.target - Basic System. Dec 16 03:50:49.739283 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 03:50:49.740760 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 03:50:49.742276 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 03:50:49.743809 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 03:50:49.745365 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 03:50:49.746956 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 03:50:49.748464 systemd[1]: Stopped target sysinit.target - System Initialization. 
Dec 16 03:50:49.750115 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 03:50:49.751617 systemd[1]: Stopped target swap.target - Swaps. Dec 16 03:50:49.759945 kernel: audit: type=1131 audit(1765857049.753:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.753031 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 03:50:49.753294 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 03:50:49.759897 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:50:49.760904 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:50:49.771247 kernel: audit: type=1131 audit(1765857049.764:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.762379 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 03:50:49.762622 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:50:49.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.763889 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Dec 16 03:50:49.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.764070 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 03:50:49.771107 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 03:50:49.771400 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 03:50:49.773239 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 03:50:49.773430 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 03:50:49.777024 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 03:50:49.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.777818 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 03:50:49.779915 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:50:49.783023 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 03:50:49.784746 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 03:50:49.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.786005 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:50:49.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:50:49.789087 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 03:50:49.789288 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:50:49.790556 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 03:50:49.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.792732 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 03:50:49.805735 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 03:50:49.805900 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 03:50:49.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.820183 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 03:50:49.825447 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 03:50:49.826805 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 03:50:49.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:50:49.831067 ignition[1105]: INFO : Ignition 2.24.0 Dec 16 03:50:49.831067 ignition[1105]: INFO : Stage: umount Dec 16 03:50:49.833586 ignition[1105]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:50:49.833586 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 03:50:49.833586 ignition[1105]: INFO : umount: umount passed Dec 16 03:50:49.833586 ignition[1105]: INFO : Ignition finished successfully Dec 16 03:50:49.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.833844 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 03:50:49.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.834020 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 03:50:49.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.836016 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 03:50:49.836125 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 03:50:49.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.837591 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Dec 16 03:50:49.837663 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 03:50:49.838984 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 03:50:49.839060 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 03:50:49.840337 systemd[1]: Stopped target network.target - Network. Dec 16 03:50:49.841572 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 03:50:49.841648 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 03:50:49.843070 systemd[1]: Stopped target paths.target - Path Units. Dec 16 03:50:49.844289 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 03:50:49.847793 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:50:49.848773 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 03:50:49.850290 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 03:50:49.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.851840 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 03:50:49.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.851922 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 03:50:49.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.853132 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 03:50:49.853207 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Dec 16 03:50:49.854640 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 03:50:49.854689 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:50:49.856329 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 03:50:49.856439 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 03:50:49.857758 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 03:50:49.857839 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 03:50:49.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.878000 audit: BPF prog-id=9 op=UNLOAD Dec 16 03:50:49.859097 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 03:50:49.859176 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 03:50:49.860609 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 03:50:49.862369 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 03:50:49.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.872562 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 03:50:49.872798 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 03:50:49.880778 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 03:50:49.885000 audit: BPF prog-id=6 op=UNLOAD Dec 16 03:50:49.880964 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 03:50:49.884415 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 03:50:49.885614 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Dec 16 03:50:49.885697 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:50:49.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.888127 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 03:50:49.890075 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 03:50:49.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.890155 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 03:50:49.892197 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 03:50:49.892273 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:50:49.893000 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 03:50:49.893068 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 03:50:49.896490 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:50:49.906170 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 03:50:49.907177 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:50:49.909151 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Dec 16 03:50:49.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.909220 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 03:50:49.911458 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 03:50:49.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.911535 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:50:49.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.912306 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 03:50:49.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.912387 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 03:50:49.915087 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 03:50:49.915168 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 03:50:49.916645 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 03:50:49.916738 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 03:50:49.920928 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 03:50:49.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 03:50:49.922339 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 03:50:49.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.922416 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:50:49.924160 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 03:50:49.924230 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:50:49.926904 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:50:49.926988 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:50:49.944591 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 03:50:49.944812 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 03:50:49.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.953065 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 03:50:49.953233 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 03:50:49.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:50:49.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:50:49.955248 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 03:50:49.958695 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 03:50:49.984642 systemd[1]: Switching root. Dec 16 03:50:50.024367 systemd-journald[331]: Journal stopped Dec 16 03:50:51.612689 systemd-journald[331]: Received SIGTERM from PID 1 (systemd). Dec 16 03:50:51.612813 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 03:50:51.612860 kernel: SELinux: policy capability open_perms=1 Dec 16 03:50:51.612884 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 03:50:51.612904 kernel: SELinux: policy capability always_check_network=0 Dec 16 03:50:51.612924 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 03:50:51.612945 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 03:50:51.612973 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 03:50:51.613000 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 03:50:51.613049 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 03:50:51.613082 systemd[1]: Successfully loaded SELinux policy in 78.962ms. Dec 16 03:50:51.613115 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.330ms. Dec 16 03:50:51.613151 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 03:50:51.613174 systemd[1]: Detected virtualization kvm. 
Dec 16 03:50:51.613198 systemd[1]: Detected architecture x86-64. Dec 16 03:50:51.613233 systemd[1]: Detected first boot. Dec 16 03:50:51.613257 systemd[1]: Hostname set to . Dec 16 03:50:51.613299 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 03:50:51.613322 zram_generator::config[1149]: No configuration found. Dec 16 03:50:51.613353 kernel: Guest personality initialized and is inactive Dec 16 03:50:51.613374 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 03:50:51.613402 kernel: Initialized host personality Dec 16 03:50:51.613436 kernel: NET: Registered PF_VSOCK protocol family Dec 16 03:50:51.613479 systemd[1]: Populated /etc with preset unit settings. Dec 16 03:50:51.613504 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 03:50:51.613526 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 03:50:51.613548 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 03:50:51.613585 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 03:50:51.613610 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 03:50:51.613646 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 03:50:51.613669 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 03:50:51.613693 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 03:50:51.619756 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 03:50:51.619794 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 03:50:51.619821 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 03:50:51.619844 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 16 03:50:51.619886 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:50:51.619911 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 03:50:51.619934 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 03:50:51.619958 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 03:50:51.619981 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:50:51.620017 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 03:50:51.620041 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:50:51.620064 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:50:51.620094 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 03:50:51.620117 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 03:50:51.620139 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 03:50:51.620161 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 03:50:51.620195 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:50:51.620219 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 03:50:51.620241 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 03:50:51.620272 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:50:51.620294 systemd[1]: Reached target swap.target - Swaps. Dec 16 03:50:51.620316 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 03:50:51.620345 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Dec 16 03:50:51.620378 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 03:50:51.620403 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:50:51.620426 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 03:50:51.620467 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:50:51.620491 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 03:50:51.620514 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 03:50:51.620536 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:50:51.620572 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:50:51.620596 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 03:50:51.620619 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 03:50:51.620642 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 03:50:51.620664 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 03:50:51.620687 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:50:51.620710 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 03:50:51.620762 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 03:50:51.620786 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 03:50:51.620810 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 03:50:51.620832 systemd[1]: Reached target machines.target - Containers. 
Dec 16 03:50:51.620854 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 03:50:51.620876 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:50:51.620899 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 03:50:51.620935 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 03:50:51.620975 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:50:51.620998 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:50:51.621026 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:50:51.621048 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 03:50:51.621070 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:50:51.621093 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 03:50:51.621128 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 03:50:51.621153 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 03:50:51.621181 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 03:50:51.621204 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 03:50:51.621239 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:50:51.621263 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:50:51.621286 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Dec 16 03:50:51.621309 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:50:51.621331 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 03:50:51.621353 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 03:50:51.621387 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:50:51.621411 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:50:51.621445 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 03:50:51.621486 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 03:50:51.621516 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 03:50:51.621538 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 03:50:51.621561 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 03:50:51.621597 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 03:50:51.621621 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:50:51.621644 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 03:50:51.621667 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 03:50:51.621691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:50:51.623753 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:50:51.623785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:50:51.623810 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Dec 16 03:50:51.623848 kernel: ACPI: bus type drm_connector registered Dec 16 03:50:51.623874 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:50:51.623897 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 03:50:51.623937 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:50:51.623962 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:50:51.623985 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 03:50:51.624044 systemd-journald[1237]: Collecting audit messages is enabled. Dec 16 03:50:51.624085 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:50:51.624109 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 03:50:51.624148 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:50:51.624173 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:50:51.624209 kernel: fuse: init (API version 7.41) Dec 16 03:50:51.624233 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 03:50:51.624256 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 03:50:51.624279 systemd-journald[1237]: Journal started Dec 16 03:50:51.624324 systemd-journald[1237]: Runtime Journal (/run/log/journal/f65e21e24a8b4eb7adb0cfd60b4f36fc) is 4.7M, max 37.7M, 33M free. 
Dec 16 03:50:51.269000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 16 03:50:51.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.439000 audit: BPF prog-id=14 op=UNLOAD
Dec 16 03:50:51.439000 audit: BPF prog-id=13 op=UNLOAD
Dec 16 03:50:51.442000 audit: BPF prog-id=15 op=LOAD
Dec 16 03:50:51.443000 audit: BPF prog-id=16 op=LOAD
Dec 16 03:50:51.443000 audit: BPF prog-id=17 op=LOAD
Dec 16 03:50:51.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.604000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 16 03:50:51.604000 audit[1237]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff1ed726f0 a2=4000 a3=0 items=0 ppid=1 pid=1237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:50:51.604000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 16 03:50:51.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.627798 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 03:50:51.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.154889 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 03:50:51.170148 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 16 03:50:51.170984 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 03:50:51.634791 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 03:50:51.639180 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 03:50:51.639267 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 03:50:51.644741 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 03:50:51.650746 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 03:50:51.654748 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 03:50:51.658735 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 03:50:51.663747 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 03:50:51.676750 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 03:50:51.676835 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 03:50:51.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.682828 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 03:50:51.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.684033 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 03:50:51.684351 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 03:50:51.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.692695 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 03:50:51.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.705854 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 03:50:51.710998 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 03:50:51.714967 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 03:50:51.722993 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 03:50:51.730737 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 03:50:51.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.741736 kernel: loop1: detected capacity change from 0 to 50784
Dec 16 03:50:51.768590 systemd-journald[1237]: Time spent on flushing to /var/log/journal/f65e21e24a8b4eb7adb0cfd60b4f36fc is 26.466ms for 1296 entries.
Dec 16 03:50:51.768590 systemd-journald[1237]: System Journal (/var/log/journal/f65e21e24a8b4eb7adb0cfd60b4f36fc) is 8M, max 588.1M, 580.1M free.
Dec 16 03:50:51.807303 systemd-journald[1237]: Received client request to flush runtime journal.
Dec 16 03:50:51.807362 kernel: loop2: detected capacity change from 0 to 229808
Dec 16 03:50:51.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.781529 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 03:50:51.809711 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 03:50:51.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.833896 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 03:50:51.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.838000 audit: BPF prog-id=18 op=LOAD
Dec 16 03:50:51.838000 audit: BPF prog-id=19 op=LOAD
Dec 16 03:50:51.838000 audit: BPF prog-id=20 op=LOAD
Dec 16 03:50:51.842745 kernel: loop3: detected capacity change from 0 to 8
Dec 16 03:50:51.844976 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Dec 16 03:50:51.847000 audit: BPF prog-id=21 op=LOAD
Dec 16 03:50:51.849972 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 03:50:51.855374 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 03:50:51.867760 kernel: loop4: detected capacity change from 0 to 111560
Dec 16 03:50:51.870000 audit: BPF prog-id=22 op=LOAD
Dec 16 03:50:51.871000 audit: BPF prog-id=23 op=LOAD
Dec 16 03:50:51.871000 audit: BPF prog-id=24 op=LOAD
Dec 16 03:50:51.876000 audit: BPF prog-id=25 op=LOAD
Dec 16 03:50:51.873925 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 03:50:51.880000 audit: BPF prog-id=26 op=LOAD
Dec 16 03:50:51.880000 audit: BPF prog-id=27 op=LOAD
Dec 16 03:50:51.883983 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Dec 16 03:50:51.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.913824 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 03:50:51.924805 kernel: loop5: detected capacity change from 0 to 50784
Dec 16 03:50:51.952750 kernel: loop6: detected capacity change from 0 to 229808
Dec 16 03:50:51.959706 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Dec 16 03:50:51.959769 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Dec 16 03:50:51.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.977428 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 03:50:51.981735 kernel: loop7: detected capacity change from 0 to 8
Dec 16 03:50:51.990746 kernel: loop1: detected capacity change from 0 to 111560
Dec 16 03:50:51.997542 systemd-nsresourced[1306]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Dec 16 03:50:51.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:51.999614 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Dec 16 03:50:52.008994 (sd-merge)[1310]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'.
Dec 16 03:50:52.011615 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 03:50:52.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:52.020332 (sd-merge)[1310]: Merged extensions into '/usr'.
Dec 16 03:50:52.033961 systemd[1]: Reload requested from client PID 1267 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 03:50:52.034000 systemd[1]: Reloading...
Dec 16 03:50:52.177677 systemd-oomd[1301]: No swap; memory pressure usage will be degraded
Dec 16 03:50:52.186931 zram_generator::config[1351]: No configuration found.
Dec 16 03:50:52.250693 systemd-resolved[1303]: Positive Trust Anchors:
Dec 16 03:50:52.250729 systemd-resolved[1303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 03:50:52.250741 systemd-resolved[1303]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 03:50:52.250785 systemd-resolved[1303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 03:50:52.282030 systemd-resolved[1303]: Using system hostname 'srv-n64tt.gb1.brightbox.com'.
Dec 16 03:50:52.544325 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 03:50:52.544614 systemd[1]: Reloading finished in 509 ms.
Dec 16 03:50:52.568269 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Dec 16 03:50:52.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:52.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:52.569466 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 03:50:52.570677 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 03:50:52.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:52.575512 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 03:50:52.578505 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 03:50:52.588873 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 03:50:52.598925 systemd[1]: Starting ensure-sysext.service...
Dec 16 03:50:52.603012 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 03:50:52.611000 audit: BPF prog-id=28 op=LOAD
Dec 16 03:50:52.612000 audit: BPF prog-id=18 op=UNLOAD
Dec 16 03:50:52.612000 audit: BPF prog-id=29 op=LOAD
Dec 16 03:50:52.612000 audit: BPF prog-id=30 op=LOAD
Dec 16 03:50:52.612000 audit: BPF prog-id=19 op=UNLOAD
Dec 16 03:50:52.613000 audit: BPF prog-id=20 op=UNLOAD
Dec 16 03:50:52.615000 audit: BPF prog-id=31 op=LOAD
Dec 16 03:50:52.615000 audit: BPF prog-id=25 op=UNLOAD
Dec 16 03:50:52.615000 audit: BPF prog-id=32 op=LOAD
Dec 16 03:50:52.615000 audit: BPF prog-id=33 op=LOAD
Dec 16 03:50:52.615000 audit: BPF prog-id=26 op=UNLOAD
Dec 16 03:50:52.616000 audit: BPF prog-id=27 op=UNLOAD
Dec 16 03:50:52.620000 audit: BPF prog-id=34 op=LOAD
Dec 16 03:50:52.621000 audit: BPF prog-id=15 op=UNLOAD
Dec 16 03:50:52.621000 audit: BPF prog-id=35 op=LOAD
Dec 16 03:50:52.622000 audit: BPF prog-id=36 op=LOAD
Dec 16 03:50:52.622000 audit: BPF prog-id=16 op=UNLOAD
Dec 16 03:50:52.622000 audit: BPF prog-id=17 op=UNLOAD
Dec 16 03:50:52.623000 audit: BPF prog-id=37 op=LOAD
Dec 16 03:50:52.625000 audit: BPF prog-id=21 op=UNLOAD
Dec 16 03:50:52.626000 audit: BPF prog-id=38 op=LOAD
Dec 16 03:50:52.626000 audit: BPF prog-id=22 op=UNLOAD
Dec 16 03:50:52.627000 audit: BPF prog-id=39 op=LOAD
Dec 16 03:50:52.627000 audit: BPF prog-id=40 op=LOAD
Dec 16 03:50:52.627000 audit: BPF prog-id=23 op=UNLOAD
Dec 16 03:50:52.627000 audit: BPF prog-id=24 op=UNLOAD
Dec 16 03:50:52.631969 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 03:50:52.633042 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 03:50:52.653019 systemd[1]: Reload requested from client PID 1410 ('systemctl') (unit ensure-sysext.service)...
Dec 16 03:50:52.653049 systemd[1]: Reloading...
Dec 16 03:50:52.672062 systemd-tmpfiles[1411]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 03:50:52.673060 systemd-tmpfiles[1411]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 03:50:52.674195 systemd-tmpfiles[1411]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 03:50:52.678910 systemd-tmpfiles[1411]: ACLs are not supported, ignoring.
Dec 16 03:50:52.679121 systemd-tmpfiles[1411]: ACLs are not supported, ignoring.
Dec 16 03:50:52.696366 systemd-tmpfiles[1411]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 03:50:52.696387 systemd-tmpfiles[1411]: Skipping /boot
Dec 16 03:50:52.719534 systemd-tmpfiles[1411]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 03:50:52.719685 systemd-tmpfiles[1411]: Skipping /boot
Dec 16 03:50:52.765751 zram_generator::config[1445]: No configuration found.
Dec 16 03:50:53.041486 systemd[1]: Reloading finished in 387 ms.
Dec 16 03:50:53.059209 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 03:50:53.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:53.061000 audit: BPF prog-id=41 op=LOAD
Dec 16 03:50:53.061000 audit: BPF prog-id=38 op=UNLOAD
Dec 16 03:50:53.062000 audit: BPF prog-id=42 op=LOAD
Dec 16 03:50:53.062000 audit: BPF prog-id=43 op=LOAD
Dec 16 03:50:53.062000 audit: BPF prog-id=39 op=UNLOAD
Dec 16 03:50:53.062000 audit: BPF prog-id=40 op=UNLOAD
Dec 16 03:50:53.063000 audit: BPF prog-id=44 op=LOAD
Dec 16 03:50:53.063000 audit: BPF prog-id=34 op=UNLOAD
Dec 16 03:50:53.064000 audit: BPF prog-id=45 op=LOAD
Dec 16 03:50:53.064000 audit: BPF prog-id=46 op=LOAD
Dec 16 03:50:53.064000 audit: BPF prog-id=35 op=UNLOAD
Dec 16 03:50:53.064000 audit: BPF prog-id=36 op=UNLOAD
Dec 16 03:50:53.066000 audit: BPF prog-id=47 op=LOAD
Dec 16 03:50:53.066000 audit: BPF prog-id=31 op=UNLOAD
Dec 16 03:50:53.066000 audit: BPF prog-id=48 op=LOAD
Dec 16 03:50:53.066000 audit: BPF prog-id=49 op=LOAD
Dec 16 03:50:53.066000 audit: BPF prog-id=32 op=UNLOAD
Dec 16 03:50:53.066000 audit: BPF prog-id=33 op=UNLOAD
Dec 16 03:50:53.067000 audit: BPF prog-id=50 op=LOAD
Dec 16 03:50:53.067000 audit: BPF prog-id=37 op=UNLOAD
Dec 16 03:50:53.071000 audit: BPF prog-id=51 op=LOAD
Dec 16 03:50:53.071000 audit: BPF prog-id=28 op=UNLOAD
Dec 16 03:50:53.071000 audit: BPF prog-id=52 op=LOAD
Dec 16 03:50:53.071000 audit: BPF prog-id=53 op=LOAD
Dec 16 03:50:53.071000 audit: BPF prog-id=29 op=UNLOAD
Dec 16 03:50:53.071000 audit: BPF prog-id=30 op=UNLOAD
Dec 16 03:50:53.076338 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 03:50:53.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:53.088699 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 03:50:53.097171 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 03:50:53.099663 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 03:50:53.105000 audit: BPF prog-id=8 op=UNLOAD
Dec 16 03:50:53.105000 audit: BPF prog-id=7 op=UNLOAD
Dec 16 03:50:53.105186 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 03:50:53.106000 audit: BPF prog-id=54 op=LOAD
Dec 16 03:50:53.106000 audit: BPF prog-id=55 op=LOAD
Dec 16 03:50:53.112076 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 03:50:53.121824 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 03:50:53.129124 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 03:50:53.129433 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 03:50:53.136320 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 03:50:53.161644 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 03:50:53.165130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 03:50:53.166182 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 03:50:53.166682 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 03:50:53.167077 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 03:50:53.167330 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 03:50:53.177577 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 03:50:53.179124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 03:50:53.180009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 03:50:53.180238 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 03:50:53.180382 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 03:50:53.180531 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 03:50:53.188519 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 03:50:53.189627 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 03:50:53.200000 audit[1509]: SYSTEM_BOOT pid=1509 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:53.212876 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 03:50:53.214916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 03:50:53.215170 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 03:50:53.215318 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 03:50:53.215523 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 03:50:53.217685 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 03:50:53.223284 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 03:50:53.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:53.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:53.243338 systemd-udevd[1507]: Using default interface naming scheme 'v257'.
Dec 16 03:50:53.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:53.249108 systemd[1]: Finished ensure-sysext.service.
Dec 16 03:50:53.251661 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 03:50:53.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:53.257000 audit: BPF prog-id=56 op=LOAD
Dec 16 03:50:53.262321 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 16 03:50:53.268694 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 03:50:53.269669 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 03:50:53.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:53.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:53.276084 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 03:50:53.278126 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 03:50:53.279359 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 03:50:53.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:53.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:53.281436 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 03:50:53.291942 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 03:50:53.292258 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 03:50:53.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:53.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:53.307994 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 03:50:53.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:53.328405 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 03:50:53.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:50:53.330000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 16 03:50:53.330000 audit[1544]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcc60682f0 a2=420 a3=0 items=0 ppid=1503 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:50:53.330000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 03:50:53.332445 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 03:50:53.334034 augenrules[1544]: No rules
Dec 16 03:50:53.332824 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 03:50:53.344128 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 03:50:53.384904 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 03:50:53.386354 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 03:50:53.477707 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 16 03:50:53.480075 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 03:50:53.596038 systemd-networkd[1555]: lo: Link UP
Dec 16 03:50:53.596105 systemd-networkd[1555]: lo: Gained carrier
Dec 16 03:50:53.606824 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 03:50:53.609036 systemd[1]: Reached target network.target - Network.
Dec 16 03:50:53.614709 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 03:50:53.619022 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 03:50:53.619203 systemd-networkd[1555]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:50:53.619216 systemd-networkd[1555]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 03:50:53.622129 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 03:50:53.628212 systemd-networkd[1555]: eth0: Link UP Dec 16 03:50:53.628546 systemd-networkd[1555]: eth0: Gained carrier Dec 16 03:50:53.628580 systemd-networkd[1555]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:50:53.653832 systemd-networkd[1555]: eth0: DHCPv4 address 10.230.36.234/30, gateway 10.230.36.233 acquired from 10.230.36.233 Dec 16 03:50:53.656209 systemd-timesyncd[1531]: Network configuration changed, trying to establish connection. Dec 16 03:50:53.689254 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 03:50:53.800767 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 03:50:53.856765 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 16 03:50:53.885769 kernel: ACPI: button: Power Button [PWRF] Dec 16 03:50:53.912076 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 03:50:53.925987 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 03:50:53.964742 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 03:50:53.983608 ldconfig[1505]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Dec 16 03:50:53.987748 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 03:50:53.995755 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 03:50:53.994852 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 03:50:53.999957 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 03:50:54.027804 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 03:50:54.030260 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 03:50:54.032433 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 03:50:54.033867 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 03:50:54.034634 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 03:50:54.035625 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 03:50:54.039292 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 03:50:54.040180 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 03:50:54.042045 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 03:50:54.043795 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 03:50:54.044565 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 03:50:54.044608 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:50:54.045538 systemd[1]: Reached target timers.target - Timer Units. Dec 16 03:50:54.048785 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 03:50:54.051644 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Dec 16 03:50:54.057557 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 03:50:54.059974 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 03:50:54.061265 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 03:50:54.071969 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 03:50:54.073100 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 03:50:54.074679 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 03:50:54.076363 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:50:54.077058 systemd[1]: Reached target basic.target - Basic System. Dec 16 03:50:54.077769 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:50:54.077819 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:50:54.079860 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 03:50:54.083673 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 03:50:54.087960 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 03:50:54.093498 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 03:50:54.099997 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 03:50:54.105273 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 03:50:54.107808 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 03:50:54.112115 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Dec 16 03:50:54.119950 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 03:50:54.124263 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 03:50:54.131992 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 03:50:54.136113 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 03:50:54.139989 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 03:50:54.150923 jq[1605]: false Dec 16 03:50:54.152562 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 03:50:54.153881 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 03:50:54.154592 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 03:50:54.158796 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 03:50:54.170910 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 03:50:54.186884 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 03:50:54.193736 google_oslogin_nss_cache[1607]: oslogin_cache_refresh[1607]: Refreshing passwd entry cache Dec 16 03:50:54.191826 oslogin_cache_refresh[1607]: Refreshing passwd entry cache Dec 16 03:50:54.198447 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 03:50:54.198904 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 03:50:54.202415 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 03:50:54.203786 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 16 03:50:54.212756 extend-filesystems[1606]: Found /dev/vda6 Dec 16 03:50:54.225053 oslogin_cache_refresh[1607]: Failure getting users, quitting Dec 16 03:50:54.227978 jq[1622]: true Dec 16 03:50:54.228155 google_oslogin_nss_cache[1607]: oslogin_cache_refresh[1607]: Failure getting users, quitting Dec 16 03:50:54.228155 google_oslogin_nss_cache[1607]: oslogin_cache_refresh[1607]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 03:50:54.228155 google_oslogin_nss_cache[1607]: oslogin_cache_refresh[1607]: Refreshing group entry cache Dec 16 03:50:54.225088 oslogin_cache_refresh[1607]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 03:50:54.225162 oslogin_cache_refresh[1607]: Refreshing group entry cache Dec 16 03:50:54.233764 extend-filesystems[1606]: Found /dev/vda9 Dec 16 03:50:54.232846 oslogin_cache_refresh[1607]: Failure getting groups, quitting Dec 16 03:50:54.237971 google_oslogin_nss_cache[1607]: oslogin_cache_refresh[1607]: Failure getting groups, quitting Dec 16 03:50:54.237971 google_oslogin_nss_cache[1607]: oslogin_cache_refresh[1607]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 03:50:54.232861 oslogin_cache_refresh[1607]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 03:50:54.239222 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 03:50:54.241133 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 03:50:54.243621 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 03:50:54.244493 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Dec 16 03:50:54.250007 extend-filesystems[1606]: Checking size of /dev/vda9 Dec 16 03:50:54.268656 update_engine[1619]: I20251216 03:50:54.265377 1619 main.cc:92] Flatcar Update Engine starting Dec 16 03:50:54.289016 jq[1637]: true Dec 16 03:50:54.303052 extend-filesystems[1606]: Resized partition /dev/vda9 Dec 16 03:50:54.310746 tar[1627]: linux-amd64/LICENSE Dec 16 03:50:54.310746 tar[1627]: linux-amd64/helm Dec 16 03:50:54.318822 extend-filesystems[1655]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 03:50:54.341593 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:50:54.345740 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Dec 16 03:50:54.344590 dbus-daemon[1603]: [system] SELinux support is enabled Dec 16 03:50:54.356851 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 03:50:54.361286 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 03:50:54.361333 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 03:50:54.362164 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 03:50:54.362192 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Dec 16 03:50:54.401626 dbus-daemon[1603]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.4' (uid=244 pid=1555 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 03:50:54.409532 update_engine[1619]: I20251216 03:50:54.409288 1619 update_check_scheduler.cc:74] Next update check in 7m43s Dec 16 03:50:54.409830 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 03:50:54.411909 systemd[1]: Started update-engine.service - Update Engine. Dec 16 03:50:54.422965 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 03:50:54.507421 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 03:50:54.529103 bash[1676]: Updated "/home/core/.ssh/authorized_keys" Dec 16 03:50:54.535855 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 03:50:54.542949 systemd[1]: Starting sshkeys.service... Dec 16 03:50:54.591773 systemd-logind[1614]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 03:50:54.603543 systemd-logind[1614]: New seat seat0. Dec 16 03:50:54.606343 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 03:50:54.686264 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Dec 16 03:50:54.709406 containerd[1640]: time="2025-12-16T03:50:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 03:50:54.725769 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Dec 16 03:50:54.738144 systemd-logind[1614]: Watching system buttons on /dev/input/event3 (Power Button) Dec 16 03:50:54.744932 containerd[1640]: time="2025-12-16T03:50:54.744829748Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 03:50:54.761769 extend-filesystems[1655]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 03:50:54.761769 extend-filesystems[1655]: old_desc_blocks = 1, new_desc_blocks = 7 Dec 16 03:50:54.761769 extend-filesystems[1655]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Dec 16 03:50:55.247054 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 03:50:55.048400 dbus-daemon[1603]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 03:50:55.247394 containerd[1640]: time="2025-12-16T03:50:54.841356093Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.957µs" Dec 16 03:50:55.247394 containerd[1640]: time="2025-12-16T03:50:54.841423493Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 03:50:55.247394 containerd[1640]: time="2025-12-16T03:50:54.841495793Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 03:50:55.247394 containerd[1640]: time="2025-12-16T03:50:54.841521586Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 03:50:55.247394 containerd[1640]: time="2025-12-16T03:50:54.849618738Z" level=info msg="loading plugin" 
id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 03:50:55.247394 containerd[1640]: time="2025-12-16T03:50:54.849651142Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:50:55.247394 containerd[1640]: time="2025-12-16T03:50:54.849791590Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:50:55.247394 containerd[1640]: time="2025-12-16T03:50:54.849814670Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:50:55.247394 containerd[1640]: time="2025-12-16T03:50:54.850062215Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:50:55.247394 containerd[1640]: time="2025-12-16T03:50:54.850086414Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:50:55.247394 containerd[1640]: time="2025-12-16T03:50:54.850105837Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:50:55.247394 containerd[1640]: time="2025-12-16T03:50:54.850120235Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:50:54.774918 systemd-networkd[1555]: eth0: Gained IPv6LL Dec 16 03:50:55.248215 extend-filesystems[1606]: Resized filesystem in /dev/vda9 Dec 16 03:50:55.052437 dbus-daemon[1603]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1673 comm="/usr/lib/systemd/systemd-hostnamed" 
label="system_u:system_r:kernel_t:s0") Dec 16 03:50:55.307197 containerd[1640]: time="2025-12-16T03:50:54.850382645Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:50:55.307197 containerd[1640]: time="2025-12-16T03:50:54.850405543Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 03:50:55.307197 containerd[1640]: time="2025-12-16T03:50:54.850541680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 03:50:55.307197 containerd[1640]: time="2025-12-16T03:50:54.853176001Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 03:50:55.307197 containerd[1640]: time="2025-12-16T03:50:54.853230478Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 03:50:55.307197 containerd[1640]: time="2025-12-16T03:50:54.853250499Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 03:50:55.307197 containerd[1640]: time="2025-12-16T03:50:54.855213075Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 03:50:55.307197 containerd[1640]: time="2025-12-16T03:50:54.860336779Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 03:50:55.307197 containerd[1640]: time="2025-12-16T03:50:54.860460590Z" level=info msg="metadata content store policy set" policy=shared Dec 16 03:50:55.307197 containerd[1640]: time="2025-12-16T03:50:54.868249990Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 
03:50:55.307197 containerd[1640]: time="2025-12-16T03:50:54.881120379Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:50:55.307197 containerd[1640]: time="2025-12-16T03:50:54.881254442Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:50:55.307197 containerd[1640]: time="2025-12-16T03:50:54.881319025Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 03:50:54.780002 systemd-timesyncd[1531]: Network configuration changed, trying to establish connection. Dec 16 03:50:55.308284 containerd[1640]: time="2025-12-16T03:50:54.881340845Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 03:50:55.308284 containerd[1640]: time="2025-12-16T03:50:54.881385266Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 03:50:55.308284 containerd[1640]: time="2025-12-16T03:50:54.881409005Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 03:50:55.308284 containerd[1640]: time="2025-12-16T03:50:54.881426168Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 03:50:55.308284 containerd[1640]: time="2025-12-16T03:50:54.881445337Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 03:50:55.308284 containerd[1640]: time="2025-12-16T03:50:54.881469894Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 03:50:55.308284 containerd[1640]: time="2025-12-16T03:50:54.881490905Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Dec 16 03:50:55.308284 containerd[1640]: time="2025-12-16T03:50:54.881514475Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 03:50:55.308284 containerd[1640]: time="2025-12-16T03:50:54.881531334Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 03:50:55.308284 containerd[1640]: time="2025-12-16T03:50:54.881550727Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 03:50:55.308284 containerd[1640]: time="2025-12-16T03:50:54.884766159Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 03:50:55.308284 containerd[1640]: time="2025-12-16T03:50:54.884817218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 03:50:55.308284 containerd[1640]: time="2025-12-16T03:50:54.884844166Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 03:50:55.308284 containerd[1640]: time="2025-12-16T03:50:54.884863114Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 03:50:55.021280 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Dec 16 03:50:55.312964 containerd[1640]: time="2025-12-16T03:50:54.884884869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 03:50:55.312964 containerd[1640]: time="2025-12-16T03:50:54.884902936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 03:50:55.312964 containerd[1640]: time="2025-12-16T03:50:54.884921293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 03:50:55.312964 containerd[1640]: time="2025-12-16T03:50:54.884938356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 03:50:55.312964 containerd[1640]: time="2025-12-16T03:50:54.884956629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 03:50:55.312964 containerd[1640]: time="2025-12-16T03:50:54.884974927Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 03:50:55.312964 containerd[1640]: time="2025-12-16T03:50:54.884991924Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 03:50:55.312964 containerd[1640]: time="2025-12-16T03:50:54.885033489Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 03:50:55.312964 containerd[1640]: time="2025-12-16T03:50:54.885126851Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 03:50:55.312964 containerd[1640]: time="2025-12-16T03:50:54.885158955Z" level=info msg="Start snapshots syncer" Dec 16 03:50:55.312964 containerd[1640]: time="2025-12-16T03:50:54.885207134Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 03:50:55.150605 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Dec 16 03:50:55.313395 containerd[1640]: time="2025-12-16T03:50:54.885582370Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 03:50:55.313395 containerd[1640]: time="2025-12-16T03:50:54.885653694Z" level=info msg="loading plugin" 
id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 03:50:55.215571 locksmithd[1675]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 03:50:55.313997 containerd[1640]: time="2025-12-16T03:50:54.892696283Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 03:50:55.313997 containerd[1640]: time="2025-12-16T03:50:54.894044819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 03:50:55.313997 containerd[1640]: time="2025-12-16T03:50:54.894117861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 03:50:55.313997 containerd[1640]: time="2025-12-16T03:50:54.894822900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 03:50:55.313997 containerd[1640]: time="2025-12-16T03:50:54.894849864Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 03:50:55.313997 containerd[1640]: time="2025-12-16T03:50:54.894901322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 03:50:55.313997 containerd[1640]: time="2025-12-16T03:50:54.894924273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 03:50:55.313997 containerd[1640]: time="2025-12-16T03:50:54.894942380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 03:50:55.313997 containerd[1640]: time="2025-12-16T03:50:54.895499301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 03:50:55.313997 containerd[1640]: time="2025-12-16T03:50:54.895530378Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart 
type=io.containerd.monitor.container.v1 Dec 16 03:50:55.313997 containerd[1640]: time="2025-12-16T03:50:54.895619154Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:50:55.313997 containerd[1640]: time="2025-12-16T03:50:54.896865416Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:50:55.313997 containerd[1640]: time="2025-12-16T03:50:54.896885041Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:50:55.249793 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:54.900871674Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:54.900897991Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:54.900939708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:54.900962220Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:54.900991245Z" level=info msg="runtime interface created" Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:54.901023701Z" level=info msg="created NRI interface" Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:54.901042392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:54.901063629Z" 
level=info msg="Connect containerd service" Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:54.901121187Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:54.908764784Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:55.212250865Z" level=info msg="Start subscribing containerd event" Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:55.212354158Z" level=info msg="Start recovering state" Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:55.212563230Z" level=info msg="Start event monitor" Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:55.212590540Z" level=info msg="Start cni network conf syncer for default" Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:55.212604660Z" level=info msg="Start streaming server" Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:55.212622878Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 03:50:55.314501 containerd[1640]: time="2025-12-16T03:50:55.212629750Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 03:50:55.310790 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 03:50:55.318979 containerd[1640]: time="2025-12-16T03:50:55.212639450Z" level=info msg="runtime interface starting up..." Dec 16 03:50:55.318979 containerd[1640]: time="2025-12-16T03:50:55.213003683Z" level=info msg="starting plugins..." 
Dec 16 03:50:55.318979 containerd[1640]: time="2025-12-16T03:50:55.213051428Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 03:50:55.318979 containerd[1640]: time="2025-12-16T03:50:55.213786068Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 03:50:55.318979 containerd[1640]: time="2025-12-16T03:50:55.215281929Z" level=info msg="containerd successfully booted in 0.508746s" Dec 16 03:50:55.367089 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 03:50:55.367590 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 03:50:55.383392 sshd_keygen[1641]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 03:50:55.421637 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:50:55.432498 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 03:50:55.496193 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 03:50:55.501209 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 03:50:55.506429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:50:55.512663 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 03:50:55.521651 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 03:50:55.526328 systemd[1]: Started sshd@0-10.230.36.234:22-139.178.89.65:33742.service - OpenSSH per-connection server daemon (139.178.89.65:33742). Dec 16 03:50:55.581628 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 03:50:55.587907 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 03:50:55.597293 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 03:50:55.654659 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 03:50:55.661781 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Dec 16 03:50:55.667143 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 03:50:55.669273 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 03:50:55.687580 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 03:50:55.733202 tar[1627]: linux-amd64/README.md
Dec 16 03:50:55.761008 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 03:50:55.767410 polkitd[1723]: Started polkitd version 126
Dec 16 03:50:55.773926 polkitd[1723]: Loading rules from directory /etc/polkit-1/rules.d
Dec 16 03:50:55.774635 polkitd[1723]: Loading rules from directory /run/polkit-1/rules.d
Dec 16 03:50:55.774727 polkitd[1723]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 16 03:50:55.775088 polkitd[1723]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Dec 16 03:50:55.775127 polkitd[1723]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 16 03:50:55.775193 polkitd[1723]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 16 03:50:55.776384 polkitd[1723]: Finished loading, compiling and executing 2 rules
Dec 16 03:50:55.777238 systemd[1]: Started polkit.service - Authorization Manager.
Dec 16 03:50:55.777415 dbus-daemon[1603]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 16 03:50:55.778600 polkitd[1723]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 16 03:50:55.791965 systemd-hostnamed[1673]: Hostname set to (static)
Dec 16 03:50:55.798803 systemd-timesyncd[1531]: Network configuration changed, trying to establish connection.
Dec 16 03:50:55.800253 systemd-networkd[1555]: eth0: Ignoring DHCPv6 address 2a02:1348:179:893a:24:19ff:fee6:24ea/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:893a:24:19ff:fee6:24ea/64 assigned by NDisc.
Dec 16 03:50:55.800388 systemd-networkd[1555]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 16 03:50:56.084750 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 03:50:56.210749 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 03:50:56.417478 sshd[1724]: Accepted publickey for core from 139.178.89.65 port 33742 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 03:50:56.420447 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:50:56.433249 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 03:50:56.437829 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 03:50:56.455278 systemd-logind[1614]: New session 1 of user core.
Dec 16 03:50:56.474455 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 03:50:56.481131 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 03:50:56.505502 (systemd)[1760]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:50:56.511026 systemd-logind[1614]: New session 2 of user core.
Dec 16 03:50:56.707954 systemd[1760]: Queued start job for default target default.target.
Dec 16 03:50:56.721136 systemd[1760]: Created slice app.slice - User Application Slice.
Dec 16 03:50:56.721414 systemd[1760]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Dec 16 03:50:56.721448 systemd[1760]: Reached target paths.target - Paths.
Dec 16 03:50:56.721537 systemd[1760]: Reached target timers.target - Timers.
Dec 16 03:50:56.724691 systemd[1760]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 03:50:56.726143 systemd[1760]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Dec 16 03:50:56.754920 systemd[1760]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 03:50:56.755280 systemd[1760]: Reached target sockets.target - Sockets.
Dec 16 03:50:56.757531 systemd[1760]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Dec 16 03:50:56.757820 systemd[1760]: Reached target basic.target - Basic System.
Dec 16 03:50:56.758038 systemd[1760]: Reached target default.target - Main User Target.
Dec 16 03:50:56.758232 systemd[1760]: Startup finished in 236ms.
Dec 16 03:50:56.758252 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 03:50:56.768167 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 03:50:56.802112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 03:50:56.817506 (kubelet)[1777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 03:50:57.228207 systemd[1]: Started sshd@1-10.230.36.234:22-139.178.89.65:33754.service - OpenSSH per-connection server daemon (139.178.89.65:33754).
Dec 16 03:50:57.399646 systemd-timesyncd[1531]: Network configuration changed, trying to establish connection.
Dec 16 03:50:57.528025 kubelet[1777]: E1216 03:50:57.527833 1777 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 03:50:57.531574 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 03:50:57.531870 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 03:50:57.532551 systemd[1]: kubelet.service: Consumed 1.112s CPU time, 268.6M memory peak.
Dec 16 03:50:58.021976 sshd[1785]: Accepted publickey for core from 139.178.89.65 port 33754 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 03:50:58.024135 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:50:58.032802 systemd-logind[1614]: New session 3 of user core.
Dec 16 03:50:58.044104 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 03:50:58.097764 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 03:50:58.226754 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 03:50:58.465898 sshd[1790]: Connection closed by 139.178.89.65 port 33754
Dec 16 03:50:58.466671 sshd-session[1785]: pam_unix(sshd:session): session closed for user core
Dec 16 03:50:58.472165 systemd[1]: sshd@1-10.230.36.234:22-139.178.89.65:33754.service: Deactivated successfully.
Dec 16 03:50:58.474990 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 03:50:58.478124 systemd-logind[1614]: Session 3 logged out. Waiting for processes to exit.
Dec 16 03:50:58.479330 systemd-logind[1614]: Removed session 3.
Dec 16 03:50:58.627822 systemd[1]: Started sshd@2-10.230.36.234:22-139.178.89.65:33762.service - OpenSSH per-connection server daemon (139.178.89.65:33762).
Dec 16 03:50:59.407410 sshd[1798]: Accepted publickey for core from 139.178.89.65 port 33762 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 03:50:59.409115 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:50:59.416204 systemd-logind[1614]: New session 4 of user core.
Dec 16 03:50:59.426055 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 03:50:59.847024 sshd[1802]: Connection closed by 139.178.89.65 port 33762
Dec 16 03:50:59.848051 sshd-session[1798]: pam_unix(sshd:session): session closed for user core
Dec 16 03:50:59.853647 systemd[1]: sshd@2-10.230.36.234:22-139.178.89.65:33762.service: Deactivated successfully.
Dec 16 03:50:59.856377 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 03:50:59.857777 systemd-logind[1614]: Session 4 logged out. Waiting for processes to exit.
Dec 16 03:50:59.859660 systemd-logind[1614]: Removed session 4.
Dec 16 03:51:00.764814 login[1740]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:51:00.772665 systemd-logind[1614]: New session 5 of user core.
Dec 16 03:51:00.784132 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 03:51:01.102672 login[1741]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:51:01.110801 systemd-logind[1614]: New session 6 of user core.
Dec 16 03:51:01.120082 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 03:51:02.110768 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 03:51:02.120311 coreos-metadata[1601]: Dec 16 03:51:02.120 WARN failed to locate config-drive, using the metadata service API instead
Dec 16 03:51:02.146454 coreos-metadata[1601]: Dec 16 03:51:02.146 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Dec 16 03:51:02.155388 coreos-metadata[1601]: Dec 16 03:51:02.155 INFO Fetch failed with 404: resource not found
Dec 16 03:51:02.155388 coreos-metadata[1601]: Dec 16 03:51:02.155 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 16 03:51:02.155980 coreos-metadata[1601]: Dec 16 03:51:02.155 INFO Fetch successful
Dec 16 03:51:02.156139 coreos-metadata[1601]: Dec 16 03:51:02.156 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 16 03:51:02.173830 coreos-metadata[1601]: Dec 16 03:51:02.173 INFO Fetch successful
Dec 16 03:51:02.173830 coreos-metadata[1601]: Dec 16 03:51:02.173 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 16 03:51:02.189326 coreos-metadata[1601]: Dec 16 03:51:02.188 INFO Fetch successful
Dec 16 03:51:02.189326 coreos-metadata[1601]: Dec 16 03:51:02.189 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 16 03:51:02.203657 coreos-metadata[1601]: Dec 16 03:51:02.203 INFO Fetch successful
Dec 16 03:51:02.203931 coreos-metadata[1601]: Dec 16 03:51:02.203 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 16 03:51:02.222849 coreos-metadata[1601]: Dec 16 03:51:02.222 INFO Fetch successful
Dec 16 03:51:02.239754 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Dec 16 03:51:02.253084 coreos-metadata[1693]: Dec 16 03:51:02.253 WARN failed to locate config-drive, using the metadata service API instead
Dec 16 03:51:02.261960 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 03:51:02.264154 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 03:51:02.278363 coreos-metadata[1693]: Dec 16 03:51:02.278 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 16 03:51:02.305559 coreos-metadata[1693]: Dec 16 03:51:02.305 INFO Fetch successful
Dec 16 03:51:02.305921 coreos-metadata[1693]: Dec 16 03:51:02.305 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 16 03:51:02.334036 coreos-metadata[1693]: Dec 16 03:51:02.333 INFO Fetch successful
Dec 16 03:51:02.344583 unknown[1693]: wrote ssh authorized keys file for user: core
Dec 16 03:51:02.369631 update-ssh-keys[1844]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 03:51:02.371571 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 16 03:51:02.374581 systemd[1]: Finished sshkeys.service.
Dec 16 03:51:02.376081 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 03:51:02.377845 systemd[1]: Startup finished in 3.521s (kernel) + 17.417s (initrd) + 12.165s (userspace) = 33.104s.
Dec 16 03:51:07.766008 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 03:51:07.768994 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 03:51:08.089528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 03:51:08.105488 (kubelet)[1855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 03:51:08.167949 kubelet[1855]: E1216 03:51:08.167819 1855 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 03:51:08.173922 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 03:51:08.174233 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 03:51:08.175455 systemd[1]: kubelet.service: Consumed 225ms CPU time, 108.4M memory peak.
Dec 16 03:51:10.009464 systemd[1]: Started sshd@3-10.230.36.234:22-139.178.89.65:57458.service - OpenSSH per-connection server daemon (139.178.89.65:57458).
Dec 16 03:51:10.806665 sshd[1863]: Accepted publickey for core from 139.178.89.65 port 57458 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 03:51:10.808791 sshd-session[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:51:10.818052 systemd-logind[1614]: New session 7 of user core.
Dec 16 03:51:10.826080 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 03:51:11.249478 sshd[1867]: Connection closed by 139.178.89.65 port 57458
Dec 16 03:51:11.250528 sshd-session[1863]: pam_unix(sshd:session): session closed for user core
Dec 16 03:51:11.258119 systemd[1]: sshd@3-10.230.36.234:22-139.178.89.65:57458.service: Deactivated successfully.
Dec 16 03:51:11.260807 systemd[1]: session-7.scope: Deactivated successfully.
Dec 16 03:51:11.262123 systemd-logind[1614]: Session 7 logged out. Waiting for processes to exit.
Dec 16 03:51:11.264208 systemd-logind[1614]: Removed session 7.
Dec 16 03:51:11.420583 systemd[1]: Started sshd@4-10.230.36.234:22-139.178.89.65:59542.service - OpenSSH per-connection server daemon (139.178.89.65:59542).
Dec 16 03:51:12.207561 sshd[1873]: Accepted publickey for core from 139.178.89.65 port 59542 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 03:51:12.209743 sshd-session[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:51:12.217013 systemd-logind[1614]: New session 8 of user core.
Dec 16 03:51:12.231055 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 16 03:51:12.647682 sshd[1877]: Connection closed by 139.178.89.65 port 59542
Dec 16 03:51:12.649435 sshd-session[1873]: pam_unix(sshd:session): session closed for user core
Dec 16 03:51:12.657026 systemd[1]: sshd@4-10.230.36.234:22-139.178.89.65:59542.service: Deactivated successfully.
Dec 16 03:51:12.660380 systemd[1]: session-8.scope: Deactivated successfully.
Dec 16 03:51:12.662391 systemd-logind[1614]: Session 8 logged out. Waiting for processes to exit.
Dec 16 03:51:12.664966 systemd-logind[1614]: Removed session 8.
Dec 16 03:51:12.809594 systemd[1]: Started sshd@5-10.230.36.234:22-139.178.89.65:59550.service - OpenSSH per-connection server daemon (139.178.89.65:59550).
Dec 16 03:51:13.594615 sshd[1883]: Accepted publickey for core from 139.178.89.65 port 59550 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 03:51:13.596462 sshd-session[1883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:51:13.605323 systemd-logind[1614]: New session 9 of user core.
Dec 16 03:51:13.612030 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 16 03:51:14.039970 sshd[1887]: Connection closed by 139.178.89.65 port 59550
Dec 16 03:51:14.041056 sshd-session[1883]: pam_unix(sshd:session): session closed for user core
Dec 16 03:51:14.047385 systemd-logind[1614]: Session 9 logged out. Waiting for processes to exit.
Dec 16 03:51:14.048138 systemd[1]: sshd@5-10.230.36.234:22-139.178.89.65:59550.service: Deactivated successfully.
Dec 16 03:51:14.051281 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 03:51:14.053730 systemd-logind[1614]: Removed session 9.
Dec 16 03:51:14.197437 systemd[1]: Started sshd@6-10.230.36.234:22-139.178.89.65:59566.service - OpenSSH per-connection server daemon (139.178.89.65:59566).
Dec 16 03:51:14.992202 sshd[1893]: Accepted publickey for core from 139.178.89.65 port 59566 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 03:51:14.994616 sshd-session[1893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:51:15.001882 systemd-logind[1614]: New session 10 of user core.
Dec 16 03:51:15.018401 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 16 03:51:15.309948 sudo[1898]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 03:51:15.310475 sudo[1898]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 03:51:15.332505 sudo[1898]: pam_unix(sudo:session): session closed for user root
Dec 16 03:51:15.479747 sshd[1897]: Connection closed by 139.178.89.65 port 59566
Dec 16 03:51:15.478722 sshd-session[1893]: pam_unix(sshd:session): session closed for user core
Dec 16 03:51:15.487535 systemd[1]: sshd@6-10.230.36.234:22-139.178.89.65:59566.service: Deactivated successfully.
Dec 16 03:51:15.490431 systemd[1]: session-10.scope: Deactivated successfully.
Dec 16 03:51:15.492151 systemd-logind[1614]: Session 10 logged out. Waiting for processes to exit.
Dec 16 03:51:15.494428 systemd-logind[1614]: Removed session 10.
Dec 16 03:51:15.641805 systemd[1]: Started sshd@7-10.230.36.234:22-139.178.89.65:59568.service - OpenSSH per-connection server daemon (139.178.89.65:59568).
Dec 16 03:51:16.447246 sshd[1905]: Accepted publickey for core from 139.178.89.65 port 59568 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 03:51:16.449227 sshd-session[1905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:51:16.458574 systemd-logind[1614]: New session 11 of user core.
Dec 16 03:51:16.463980 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 16 03:51:16.752582 sudo[1911]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 03:51:16.753167 sudo[1911]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 03:51:16.760751 sudo[1911]: pam_unix(sudo:session): session closed for user root
Dec 16 03:51:16.771208 sudo[1910]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 03:51:16.771759 sudo[1910]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 03:51:16.783380 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 03:51:16.839864 kernel: kauditd_printk_skb: 177 callbacks suppressed
Dec 16 03:51:16.840117 kernel: audit: type=1305 audit(1765857076.834:223): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 16 03:51:16.834000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 16 03:51:16.840413 augenrules[1935]: No rules
Dec 16 03:51:16.841500 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 03:51:16.834000 audit[1935]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd85252af0 a2=420 a3=0 items=0 ppid=1916 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:16.842467 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 03:51:16.844164 kernel: audit: type=1300 audit(1765857076.834:223): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd85252af0 a2=420 a3=0 items=0 ppid=1916 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:16.834000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 03:51:16.849435 sudo[1910]: pam_unix(sudo:session): session closed for user root
Dec 16 03:51:16.852814 kernel: audit: type=1327 audit(1765857076.834:223): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 03:51:16.852904 kernel: audit: type=1130 audit(1765857076.843:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:16.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:16.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:16.857193 kernel: audit: type=1131 audit(1765857076.843:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:16.849000 audit[1910]: USER_END pid=1910 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:16.861302 kernel: audit: type=1106 audit(1765857076.849:226): pid=1910 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:16.849000 audit[1910]: CRED_DISP pid=1910 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:16.865509 kernel: audit: type=1104 audit(1765857076.849:227): pid=1910 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:16.997188 sshd[1909]: Connection closed by 139.178.89.65 port 59568
Dec 16 03:51:16.996598 sshd-session[1905]: pam_unix(sshd:session): session closed for user core
Dec 16 03:51:16.999000 audit[1905]: USER_END pid=1905 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 03:51:17.007811 kernel: audit: type=1106 audit(1765857076.999:228): pid=1905 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 03:51:17.000000 audit[1905]: CRED_DISP pid=1905 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 03:51:17.007982 systemd[1]: sshd@7-10.230.36.234:22-139.178.89.65:59568.service: Deactivated successfully.
Dec 16 03:51:17.012895 kernel: audit: type=1104 audit(1765857077.000:229): pid=1905 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 03:51:17.011609 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 03:51:17.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.36.234:22-139.178.89.65:59568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:17.017791 kernel: audit: type=1131 audit(1765857077.007:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.36.234:22-139.178.89.65:59568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:17.016905 systemd-logind[1614]: Session 11 logged out. Waiting for processes to exit.
Dec 16 03:51:17.019934 systemd-logind[1614]: Removed session 11.
Dec 16 03:51:17.156160 systemd[1]: Started sshd@8-10.230.36.234:22-139.178.89.65:59584.service - OpenSSH per-connection server daemon (139.178.89.65:59584).
Dec 16 03:51:17.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.36.234:22-139.178.89.65:59584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:17.963000 audit[1944]: USER_ACCT pid=1944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 03:51:17.964140 sshd[1944]: Accepted publickey for core from 139.178.89.65 port 59584 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY
Dec 16 03:51:17.964000 audit[1944]: CRED_ACQ pid=1944 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 03:51:17.964000 audit[1944]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc1063bd0 a2=3 a3=0 items=0 ppid=1 pid=1944 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:17.964000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:51:17.966349 sshd-session[1944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:51:17.974669 systemd-logind[1614]: New session 12 of user core.
Dec 16 03:51:17.981980 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 03:51:17.986000 audit[1944]: USER_START pid=1944 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 03:51:17.989000 audit[1948]: CRED_ACQ pid=1948 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Dec 16 03:51:18.265801 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 03:51:18.268000 audit[1949]: USER_ACCT pid=1949 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:18.268000 audit[1949]: CRED_REFR pid=1949 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:18.269000 audit[1949]: USER_START pid=1949 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:18.269463 sudo[1949]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 03:51:18.271011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 03:51:18.270049 sudo[1949]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 03:51:18.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:18.636010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 03:51:18.649355 (kubelet)[1966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 03:51:18.716322 kubelet[1966]: E1216 03:51:18.716233 1966 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 03:51:18.719008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 03:51:18.719267 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 03:51:18.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 03:51:18.720076 systemd[1]: kubelet.service: Consumed 246ms CPU time, 108.1M memory peak.
Dec 16 03:51:19.077510 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 03:51:19.091237 (dockerd)[1983]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 03:51:19.501823 dockerd[1983]: time="2025-12-16T03:51:19.501410981Z" level=info msg="Starting up"
Dec 16 03:51:19.503512 dockerd[1983]: time="2025-12-16T03:51:19.503467948Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 03:51:19.520286 dockerd[1983]: time="2025-12-16T03:51:19.520162004Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 03:51:19.550632 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3281047276-merged.mount: Deactivated successfully.
Dec 16 03:51:19.567127 systemd[1]: var-lib-docker-metacopy\x2dcheck2388630876-merged.mount: Deactivated successfully.
Dec 16 03:51:19.589089 dockerd[1983]: time="2025-12-16T03:51:19.589039850Z" level=info msg="Loading containers: start."
Dec 16 03:51:19.606790 kernel: Initializing XFRM netlink socket
Dec 16 03:51:19.689000 audit[2035]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:19.689000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe2311f9f0 a2=0 a3=0 items=0 ppid=1983 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:19.689000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Dec 16 03:51:19.692000 audit[2037]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:19.692000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc8196d2f0 a2=0 a3=0 items=0 ppid=1983 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:19.692000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Dec 16 03:51:19.695000 audit[2039]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:19.695000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff623a55d0 a2=0 a3=0 items=0 ppid=1983 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:19.695000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Dec 16 03:51:19.698000 audit[2041]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:19.698000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6173c710 a2=0 a3=0 items=0 ppid=1983 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:19.698000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Dec 16 03:51:19.701000 audit[2043]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:19.701000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdc67073c0 a2=0 a3=0 items=0 ppid=1983 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:19.701000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Dec 16 03:51:19.704000 audit[2045]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:19.704000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc3d373f20 a2=0 a3=0 items=0 ppid=1983 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:19.704000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 03:51:19.708000 audit[2047]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:19.708000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd8f4cfbd0 a2=0 a3=0 items=0 ppid=1983 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:19.708000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Dec 16 03:51:19.711000 audit[2049]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:19.711000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdbd35f060 a2=0 a3=0 items=0 ppid=1983 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:19.711000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Dec 16 03:51:19.777000 audit[2052]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:19.777000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffd1a6a03f0 a2=0 a3=0 items=0 ppid=1983 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi"
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.777000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 03:51:19.780000 audit[2054]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:19.780000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffed7293a90 a2=0 a3=0 items=0 ppid=1983 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.780000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:51:19.784000 audit[2056]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:19.784000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffea36729c0 a2=0 a3=0 items=0 ppid=1983 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.784000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 03:51:19.787000 audit[2058]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:19.787000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc88c94e60 a2=0 a3=0 items=0 ppid=1983 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.787000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:51:19.790000 audit[2060]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:19.790000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc4d94cfc0 a2=0 a3=0 items=0 ppid=1983 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.790000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:51:19.846000 audit[2090]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:19.846000 audit[2090]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff4c37c230 a2=0 a3=0 items=0 ppid=1983 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.846000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 03:51:19.850000 audit[2092]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:19.850000 audit[2092]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff36c16c00 a2=0 a3=0 items=0 ppid=1983 pid=2092 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.850000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 03:51:19.853000 audit[2094]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:19.853000 audit[2094]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbc06f3a0 a2=0 a3=0 items=0 ppid=1983 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.853000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 03:51:19.856000 audit[2096]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:19.856000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd895ec1d0 a2=0 a3=0 items=0 ppid=1983 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.856000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 03:51:19.859000 audit[2098]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:19.859000 audit[2098]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd930cf6c0 a2=0 a3=0 items=0 ppid=1983 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.859000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 03:51:19.862000 audit[2100]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:19.862000 audit[2100]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdeb883390 a2=0 a3=0 items=0 ppid=1983 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.862000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:51:19.865000 audit[2102]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:19.865000 audit[2102]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff0eb82bb0 a2=0 a3=0 items=0 ppid=1983 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.865000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:51:19.869000 audit[2104]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:19.869000 audit[2104]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd8a5423e0 a2=0 a3=0 items=0 ppid=1983 pid=2104 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.869000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 03:51:19.873000 audit[2106]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:19.873000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffeb10de7e0 a2=0 a3=0 items=0 ppid=1983 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.873000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 03:51:19.876000 audit[2108]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:19.876000 audit[2108]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd65067cf0 a2=0 a3=0 items=0 ppid=1983 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.876000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:51:19.879000 audit[2110]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2110 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 16 03:51:19.879000 audit[2110]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe73de7620 a2=0 a3=0 items=0 ppid=1983 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.879000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 03:51:19.883000 audit[2112]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:19.883000 audit[2112]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff3c7eb820 a2=0 a3=0 items=0 ppid=1983 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.883000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:51:19.886000 audit[2114]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:19.886000 audit[2114]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff0780bc20 a2=0 a3=0 items=0 ppid=1983 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.886000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:51:19.895000 audit[2119]: NETFILTER_CFG table=filter:28 family=2 entries=1 
op=nft_register_chain pid=2119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:19.895000 audit[2119]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff905cce00 a2=0 a3=0 items=0 ppid=1983 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.895000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:51:19.899000 audit[2121]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:19.899000 audit[2121]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe5e03aa40 a2=0 a3=0 items=0 ppid=1983 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.899000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:51:19.902000 audit[2123]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:19.902000 audit[2123]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcb777a590 a2=0 a3=0 items=0 ppid=1983 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.902000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:51:19.906000 audit[2125]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain 
pid=2125 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:19.906000 audit[2125]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcd8443a10 a2=0 a3=0 items=0 ppid=1983 pid=2125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.906000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:51:19.909000 audit[2127]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:19.909000 audit[2127]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe6ad15f20 a2=0 a3=0 items=0 ppid=1983 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.909000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:51:19.913000 audit[2129]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:19.913000 audit[2129]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffed804430 a2=0 a3=0 items=0 ppid=1983 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.913000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:51:19.932010 systemd-timesyncd[1531]: Network configuration changed, trying to establish connection. 
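Each audit PROCTITLE record above carries the invoked command line as hex-encoded, NUL-separated argv. A small sketch to decode one, using the first proctitle value from the log:

```python
def decode_proctitle(hexstr: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex bytes with NUL-separated argv."""
    return bytes.fromhex(hexstr).decode().split("\x00")

# PROCTITLE value from the first NETFILTER_CFG record above (pid 2035).
argv = decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
)
print(argv)  # ['/usr/bin/iptables', '--wait', '-t', 'nat', '-N', 'DOCKER']
```

Decoding the remaining records the same way shows dockerd creating its standard chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) for both iptables and ip6tables.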
Dec 16 03:51:19.952000 audit[2133]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:19.952000 audit[2133]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffcfbd2ffc0 a2=0 a3=0 items=0 ppid=1983 pid=2133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.952000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 03:51:19.956000 audit[2135]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:19.956000 audit[2135]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd20aa8ee0 a2=0 a3=0 items=0 ppid=1983 pid=2135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.956000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 03:51:19.970000 audit[2143]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:19.970000 audit[2143]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffdb3314000 a2=0 a3=0 items=0 ppid=1983 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.970000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 03:51:19.983000 audit[2149]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:19.983000 audit[2149]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffda7c9d1a0 a2=0 a3=0 items=0 ppid=1983 pid=2149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.983000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 03:51:19.987000 audit[2151]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2151 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:19.987000 audit[2151]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffee8207070 a2=0 a3=0 items=0 ppid=1983 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.987000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 03:51:19.991000 audit[2153]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:19.991000 audit[2153]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd66cdbb00 a2=0 a3=0 items=0 ppid=1983 pid=2153 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.991000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 03:51:19.994000 audit[2155]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:19.994000 audit[2155]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe2a790c50 a2=0 a3=0 items=0 ppid=1983 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.994000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:51:19.998000 audit[2157]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:19.998000 audit[2157]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff3bc997a0 a2=0 a3=0 items=0 ppid=1983 pid=2157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:19.998000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 03:51:19.999556 systemd-networkd[1555]: docker0: Link UP Dec 16 03:51:20.004809 dockerd[1983]: time="2025-12-16T03:51:20.004752297Z" 
level=info msg="Loading containers: done." Dec 16 03:51:20.033542 dockerd[1983]: time="2025-12-16T03:51:20.033373320Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 03:51:20.033542 dockerd[1983]: time="2025-12-16T03:51:20.033492763Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 03:51:20.033826 dockerd[1983]: time="2025-12-16T03:51:20.033649160Z" level=info msg="Initializing buildkit" Dec 16 03:51:20.063840 dockerd[1983]: time="2025-12-16T03:51:20.063797556Z" level=info msg="Completed buildkit initialization" Dec 16 03:51:20.075070 dockerd[1983]: time="2025-12-16T03:51:20.074972038Z" level=info msg="Daemon has completed initialization" Dec 16 03:51:20.076699 dockerd[1983]: time="2025-12-16T03:51:20.075229894Z" level=info msg="API listen on /run/docker.sock" Dec 16 03:51:20.075475 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 03:51:20.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:51:20.144882 systemd-timesyncd[1531]: Contacted time server [2a01:7e00::f03c:94ff:fe78:a7d1]:123 (2.flatcar.pool.ntp.org). Dec 16 03:51:20.145195 systemd-timesyncd[1531]: Initial clock synchronization to Tue 2025-12-16 03:51:20.539282 UTC. Dec 16 03:51:21.262780 containerd[1640]: time="2025-12-16T03:51:21.262576991Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 03:51:22.227471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3318368993.mount: Deactivated successfully. 
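The dockerd timestamps above bracket the daemon's startup: "Starting up" at 03:51:19.501 and "Daemon has completed initialization" at 03:51:20.074. A sketch computing the elapsed time from those two log timestamps (trimmed to microseconds, since `fromisoformat` accepts at most six fractional digits):

```python
from datetime import datetime

# Timestamps copied from the dockerd log lines above, nanoseconds trimmed to µs.
start = datetime.fromisoformat("2025-12-16T03:51:19.501410981"[:26])
done = datetime.fromisoformat("2025-12-16T03:51:20.074972038"[:26])
elapsed = (done - start).total_seconds()
print(f"dockerd startup took {elapsed:.3f} s")  # ≈ 0.574 s
```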
Dec 16 03:51:24.467020 containerd[1640]: time="2025-12-16T03:51:24.466924868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:51:24.468650 containerd[1640]: time="2025-12-16T03:51:24.468614497Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Dec 16 03:51:24.470516 containerd[1640]: time="2025-12-16T03:51:24.469293214Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:51:24.473868 containerd[1640]: time="2025-12-16T03:51:24.473829308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:51:24.475294 containerd[1640]: time="2025-12-16T03:51:24.475258026Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 3.21256782s" Dec 16 03:51:24.475466 containerd[1640]: time="2025-12-16T03:51:24.475437248Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 16 03:51:24.479100 containerd[1640]: time="2025-12-16T03:51:24.479044489Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 03:51:25.814498 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
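The pull record above reports the kube-apiserver image size (30111311 bytes) and the wall time for the pull (3.21256782 s), which gives a rough effective rate. Note the duration also covers registry negotiation and unpacking, so this is only a lower bound on raw transfer speed:

```python
# Figures copied from the "Pulled image" record above.
size_bytes = 30111311       # repo size reported by containerd
duration_s = 3.21256782     # wall time reported for the pull
rate = size_bytes / duration_s
print(f"effective pull rate: {rate / 1e6:.2f} MB/s")  # ≈ 9.37 MB/s
```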
Dec 16 03:51:25.822136 kernel: kauditd_printk_skb: 134 callbacks suppressed
Dec 16 03:51:25.822349 kernel: audit: type=1131 audit(1765857085.814:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:25.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:25.831000 audit: BPF prog-id=61 op=UNLOAD
Dec 16 03:51:25.833794 kernel: audit: type=1334 audit(1765857085.831:284): prog-id=61 op=UNLOAD
Dec 16 03:51:28.765976 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 16 03:51:28.771439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 03:51:29.076983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 03:51:29.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:29.083831 kernel: audit: type=1130 audit(1765857089.075:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:29.092238 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 03:51:29.209805 kubelet[2271]: E1216 03:51:29.209585 2271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 03:51:29.213378 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 03:51:29.213648 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 03:51:29.214344 systemd[1]: kubelet.service: Consumed 229ms CPU time, 110M memory peak.
Dec 16 03:51:29.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 03:51:29.220172 kernel: audit: type=1131 audit(1765857089.213:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 03:51:29.654049 containerd[1640]: time="2025-12-16T03:51:29.653957975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:51:29.655656 containerd[1640]: time="2025-12-16T03:51:29.655346306Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26011860"
Dec 16 03:51:29.656472 containerd[1640]: time="2025-12-16T03:51:29.656431111Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:51:29.660058 containerd[1640]: time="2025-12-16T03:51:29.660019027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:51:29.661711 containerd[1640]: time="2025-12-16T03:51:29.661671562Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 5.182457686s"
Dec 16 03:51:29.661711 containerd[1640]: time="2025-12-16T03:51:29.661714665Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Dec 16 03:51:29.663298 containerd[1640]: time="2025-12-16T03:51:29.663050684Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Dec 16 03:51:32.325522 containerd[1640]: time="2025-12-16T03:51:32.325430403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:51:32.327223 containerd[1640]: time="2025-12-16T03:51:32.327172546Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965"
Dec 16 03:51:32.327873 containerd[1640]: time="2025-12-16T03:51:32.327826983Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:51:32.331691 containerd[1640]: time="2025-12-16T03:51:32.331646652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:51:32.333427 containerd[1640]: time="2025-12-16T03:51:32.333384363Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 2.670280015s"
Dec 16 03:51:32.333537 containerd[1640]: time="2025-12-16T03:51:32.333437317Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Dec 16 03:51:32.334882 containerd[1640]: time="2025-12-16T03:51:32.334627826Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Dec 16 03:51:35.549691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1661300424.mount: Deactivated successfully.
Dec 16 03:51:38.941856 containerd[1640]: time="2025-12-16T03:51:38.941769960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:51:38.943702 containerd[1640]: time="2025-12-16T03:51:38.943669569Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374"
Dec 16 03:51:38.945822 containerd[1640]: time="2025-12-16T03:51:38.945779902Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:51:38.948849 containerd[1640]: time="2025-12-16T03:51:38.948783998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:51:38.950357 containerd[1640]: time="2025-12-16T03:51:38.949694768Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 6.615005305s"
Dec 16 03:51:38.950357 containerd[1640]: time="2025-12-16T03:51:38.949760253Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Dec 16 03:51:38.951054 containerd[1640]: time="2025-12-16T03:51:38.951011923Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Dec 16 03:51:39.266086 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 16 03:51:39.269806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 03:51:39.287785 update_engine[1619]: I20251216 03:51:39.286880 1619 update_attempter.cc:509] Updating boot flags...
Dec 16 03:51:39.660072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 03:51:39.670523 kernel: audit: type=1130 audit(1765857099.659:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:39.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:39.692290 (kubelet)[2314]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 03:51:39.960788 kubelet[2314]: E1216 03:51:39.960226 2314 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 03:51:39.964410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 03:51:39.964675 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 03:51:39.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 03:51:39.966055 systemd[1]: kubelet.service: Consumed 230ms CPU time, 107.9M memory peak.
Dec 16 03:51:39.974823 kernel: audit: type=1131 audit(1765857099.964:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 03:51:40.504559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1757420472.mount: Deactivated successfully.
Dec 16 03:51:42.221872 containerd[1640]: time="2025-12-16T03:51:42.221769448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:51:42.224434 containerd[1640]: time="2025-12-16T03:51:42.224391490Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20131491"
Dec 16 03:51:42.225197 containerd[1640]: time="2025-12-16T03:51:42.225130818Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:51:42.230189 containerd[1640]: time="2025-12-16T03:51:42.230098321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:51:42.232510 containerd[1640]: time="2025-12-16T03:51:42.231623630Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.280395941s"
Dec 16 03:51:42.232510 containerd[1640]: time="2025-12-16T03:51:42.231693906Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Dec 16 03:51:42.233175 containerd[1640]: time="2025-12-16T03:51:42.233128501Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 16 03:51:43.040597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount235590317.mount: Deactivated successfully.
Dec 16 03:51:43.057565 containerd[1640]: time="2025-12-16T03:51:43.057486852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 03:51:43.059065 containerd[1640]: time="2025-12-16T03:51:43.059007052Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=316581"
Dec 16 03:51:43.059947 containerd[1640]: time="2025-12-16T03:51:43.059880150Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 03:51:43.064457 containerd[1640]: time="2025-12-16T03:51:43.064399824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 03:51:43.066236 containerd[1640]: time="2025-12-16T03:51:43.066194854Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 833.017861ms"
Dec 16 03:51:43.066696 containerd[1640]: time="2025-12-16T03:51:43.066635887Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 16 03:51:43.067461 containerd[1640]: time="2025-12-16T03:51:43.067394668Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Dec 16 03:51:43.858127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3221581723.mount: Deactivated successfully.
Dec 16 03:51:48.607677 containerd[1640]: time="2025-12-16T03:51:48.607593006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:51:48.609599 containerd[1640]: time="2025-12-16T03:51:48.609271403Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58133605"
Dec 16 03:51:48.610472 containerd[1640]: time="2025-12-16T03:51:48.610431793Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:51:48.614461 containerd[1640]: time="2025-12-16T03:51:48.614415986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:51:48.616105 containerd[1640]: time="2025-12-16T03:51:48.616058605Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.548426699s"
Dec 16 03:51:48.616279 containerd[1640]: time="2025-12-16T03:51:48.616226365Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Dec 16 03:51:50.015876 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 16 03:51:50.018219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 03:51:50.386589 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 03:51:50.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:50.393745 kernel: audit: type=1130 audit(1765857110.386:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:50.401771 (kubelet)[2462]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 03:51:50.462393 kubelet[2462]: E1216 03:51:50.462309 2462 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 03:51:50.466112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 03:51:50.466400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 03:51:50.467515 systemd[1]: kubelet.service: Consumed 223ms CPU time, 107.3M memory peak.
Dec 16 03:51:50.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 03:51:50.472844 kernel: audit: type=1131 audit(1765857110.466:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 03:51:53.944427 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 03:51:53.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:53.945090 systemd[1]: kubelet.service: Consumed 223ms CPU time, 107.3M memory peak.
Dec 16 03:51:53.951628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 03:51:53.951807 kernel: audit: type=1130 audit(1765857113.943:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:53.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:53.959782 kernel: audit: type=1131 audit(1765857113.943:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:54.005017 systemd[1]: Reload requested from client PID 2476 ('systemctl') (unit session-12.scope)...
Dec 16 03:51:54.005072 systemd[1]: Reloading...
Dec 16 03:51:54.156885 zram_generator::config[2527]: No configuration found.
Dec 16 03:51:54.516679 systemd[1]: Reloading finished in 510 ms.
Dec 16 03:51:54.553000 audit: BPF prog-id=65 op=LOAD
Dec 16 03:51:54.556766 kernel: audit: type=1334 audit(1765857114.553:293): prog-id=65 op=LOAD
Dec 16 03:51:54.556845 kernel: audit: type=1334 audit(1765857114.553:294): prog-id=66 op=LOAD
Dec 16 03:51:54.553000 audit: BPF prog-id=66 op=LOAD
Dec 16 03:51:54.553000 audit: BPF prog-id=54 op=UNLOAD
Dec 16 03:51:54.553000 audit: BPF prog-id=55 op=UNLOAD
Dec 16 03:51:54.567123 kernel: audit: type=1334 audit(1765857114.553:295): prog-id=54 op=UNLOAD
Dec 16 03:51:54.567217 kernel: audit: type=1334 audit(1765857114.553:296): prog-id=55 op=UNLOAD
Dec 16 03:51:54.567279 kernel: audit: type=1334 audit(1765857114.556:297): prog-id=67 op=LOAD
Dec 16 03:51:54.567331 kernel: audit: type=1334 audit(1765857114.556:298): prog-id=56 op=UNLOAD
Dec 16 03:51:54.556000 audit: BPF prog-id=67 op=LOAD
Dec 16 03:51:54.556000 audit: BPF prog-id=56 op=UNLOAD
Dec 16 03:51:54.565000 audit: BPF prog-id=68 op=LOAD
Dec 16 03:51:54.565000 audit: BPF prog-id=57 op=UNLOAD
Dec 16 03:51:54.567000 audit: BPF prog-id=69 op=LOAD
Dec 16 03:51:54.567000 audit: BPF prog-id=51 op=UNLOAD
Dec 16 03:51:54.567000 audit: BPF prog-id=70 op=LOAD
Dec 16 03:51:54.567000 audit: BPF prog-id=71 op=LOAD
Dec 16 03:51:54.567000 audit: BPF prog-id=52 op=UNLOAD
Dec 16 03:51:54.567000 audit: BPF prog-id=53 op=UNLOAD
Dec 16 03:51:54.570000 audit: BPF prog-id=72 op=LOAD
Dec 16 03:51:54.573000 audit: BPF prog-id=41 op=UNLOAD
Dec 16 03:51:54.573000 audit: BPF prog-id=73 op=LOAD
Dec 16 03:51:54.573000 audit: BPF prog-id=74 op=LOAD
Dec 16 03:51:54.573000 audit: BPF prog-id=42 op=UNLOAD
Dec 16 03:51:54.573000 audit: BPF prog-id=43 op=UNLOAD
Dec 16 03:51:54.575000 audit: BPF prog-id=75 op=LOAD
Dec 16 03:51:54.575000 audit: BPF prog-id=58 op=UNLOAD
Dec 16 03:51:54.577000 audit: BPF prog-id=76 op=LOAD
Dec 16 03:51:54.577000 audit: BPF prog-id=77 op=LOAD
Dec 16 03:51:54.577000 audit: BPF prog-id=59 op=UNLOAD
Dec 16 03:51:54.577000 audit: BPF prog-id=60 op=UNLOAD
Dec 16 03:51:54.578000 audit: BPF prog-id=78 op=LOAD
Dec 16 03:51:54.578000 audit: BPF prog-id=44 op=UNLOAD
Dec 16 03:51:54.579000 audit: BPF prog-id=79 op=LOAD
Dec 16 03:51:54.579000 audit: BPF prog-id=80 op=LOAD
Dec 16 03:51:54.579000 audit: BPF prog-id=45 op=UNLOAD
Dec 16 03:51:54.579000 audit: BPF prog-id=46 op=UNLOAD
Dec 16 03:51:54.580000 audit: BPF prog-id=81 op=LOAD
Dec 16 03:51:54.580000 audit: BPF prog-id=50 op=UNLOAD
Dec 16 03:51:54.581000 audit: BPF prog-id=82 op=LOAD
Dec 16 03:51:54.581000 audit: BPF prog-id=64 op=UNLOAD
Dec 16 03:51:54.583000 audit: BPF prog-id=83 op=LOAD
Dec 16 03:51:54.583000 audit: BPF prog-id=47 op=UNLOAD
Dec 16 03:51:54.583000 audit: BPF prog-id=84 op=LOAD
Dec 16 03:51:54.583000 audit: BPF prog-id=85 op=LOAD
Dec 16 03:51:54.583000 audit: BPF prog-id=48 op=UNLOAD
Dec 16 03:51:54.583000 audit: BPF prog-id=49 op=UNLOAD
Dec 16 03:51:54.604434 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 03:51:54.604617 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 03:51:54.605126 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 03:51:54.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 03:51:54.605214 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98.2M memory peak.
Dec 16 03:51:54.607873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 03:51:54.935377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 03:51:54.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:51:54.956162 (kubelet)[2591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 03:51:55.080454 kubelet[2591]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 03:51:55.080454 kubelet[2591]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 03:51:55.080454 kubelet[2591]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 03:51:55.083746 kubelet[2591]: I1216 03:51:55.083078 2591 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 03:51:56.157254 kubelet[2591]: I1216 03:51:56.157207 2591 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Dec 16 03:51:56.157254 kubelet[2591]: I1216 03:51:56.157247 2591 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 03:51:56.157942 kubelet[2591]: I1216 03:51:56.157533 2591 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 03:51:56.199595 kubelet[2591]: E1216 03:51:56.199548 2591 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.36.234:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.36.234:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 16 03:51:56.200780 kubelet[2591]: I1216 03:51:56.200615 2591 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 03:51:56.230226 kubelet[2591]: I1216 03:51:56.230128 2591 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 03:51:56.243352 kubelet[2591]: I1216 03:51:56.243274 2591 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 03:51:56.248267 kubelet[2591]: I1216 03:51:56.248159 2591 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 03:51:56.251745 kubelet[2591]: I1216 03:51:56.248227 2591 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-n64tt.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 03:51:56.251745 kubelet[2591]: I1216 03:51:56.251734 2591 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 03:51:56.252080 kubelet[2591]: I1216 03:51:56.251770 2591 container_manager_linux.go:303] "Creating device plugin manager"
Dec 16 03:51:56.253032 kubelet[2591]: I1216 03:51:56.252994 2591 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 03:51:56.256674 kubelet[2591]: I1216 03:51:56.256593 2591 kubelet.go:480] "Attempting to sync node with API server"
Dec 16 03:51:56.256674 kubelet[2591]: I1216 03:51:56.256664 2591 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 03:51:56.256845 kubelet[2591]: I1216 03:51:56.256769 2591 kubelet.go:386] "Adding apiserver pod source"
Dec 16 03:51:56.258852 kubelet[2591]: I1216 03:51:56.258525 2591 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 03:51:56.264768 kubelet[2591]: I1216 03:51:56.264735 2591 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Dec 16 03:51:56.266936 kubelet[2591]: I1216 03:51:56.266903 2591 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 03:51:56.267892 kubelet[2591]: W1216 03:51:56.267869 2591 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 03:51:56.282688 kubelet[2591]: E1216 03:51:56.282541 2591 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.36.234:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.36.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 03:51:56.282688 kubelet[2591]: I1216 03:51:56.282622 2591 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 16 03:51:56.282888 kubelet[2591]: I1216 03:51:56.282854 2591 server.go:1289] "Started kubelet"
Dec 16 03:51:56.283746 kubelet[2591]: E1216 03:51:56.283385 2591 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.36.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-n64tt.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.36.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 03:51:56.285287 kubelet[2591]: I1216 03:51:56.284895 2591 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 03:51:56.289205 kubelet[2591]: I1216 03:51:56.288180 2591 server.go:317] "Adding debug handlers to kubelet server"
Dec 16 03:51:56.293877 kubelet[2591]: I1216 03:51:56.293857 2591 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 03:51:56.307163 kubelet[2591]: I1216 03:51:56.307087 2591 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 03:51:56.307594 kubelet[2591]: I1216 03:51:56.307570 2591 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 03:51:56.308000 audit[2606]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2606 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:56.311252 kernel: kauditd_printk_skb: 38 callbacks suppressed
Dec 16 03:51:56.311352 kernel: audit: type=1325 audit(1765857116.308:337): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2606 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:56.308000 audit[2606]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcc111d850 a2=0 a3=0 items=0 ppid=2591 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:56.320756 kernel: audit: type=1300 audit(1765857116.308:337): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcc111d850 a2=0 a3=0 items=0 ppid=2591 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:56.321460 kubelet[2591]: I1216 03:51:56.321404 2591 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 03:51:56.308000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Dec 16 03:51:56.327739 kernel: audit: type=1327 audit(1765857116.308:337): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Dec 16 03:51:56.328000 audit[2608]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:56.332752 kernel: audit: type=1325 audit(1765857116.328:338): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:56.333464 kubelet[2591]: I1216 03:51:56.333436 2591 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 16 03:51:56.328000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff266c2010 a2=0 a3=0 items=0 ppid=2591 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:56.334796 kubelet[2591]: E1216 03:51:56.334766 2591 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-n64tt.gb1.brightbox.com\" not found"
Dec 16 03:51:56.337163 kubelet[2591]: I1216 03:51:56.337138 2591 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 16 03:51:56.338338 kubelet[2591]: E1216 03:51:56.308368 2591 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.36.234:6443/api/v1/namespaces/default/events\": dial tcp 10.230.36.234:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-n64tt.gb1.brightbox.com.188195b0fc1f9c05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-n64tt.gb1.brightbox.com,UID:srv-n64tt.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-n64tt.gb1.brightbox.com,},FirstTimestamp:2025-12-16 03:51:56.282657797 +0000 UTC m=+1.321652062,LastTimestamp:2025-12-16 03:51:56.282657797 +0000 UTC m=+1.321652062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-n64tt.gb1.brightbox.com,}"
Dec 16 03:51:56.338645 kubelet[2591]: I1216 03:51:56.338624 2591 reconciler.go:26] "Reconciler: start to sync state"
Dec 16 03:51:56.339221 kubelet[2591]: E1216 03:51:56.339156 2591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.36.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-n64tt.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.36.234:6443: connect: connection refused" interval="200ms"
Dec 16 03:51:56.340750 kernel: audit: type=1300 audit(1765857116.328:338): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff266c2010 a2=0 a3=0 items=0 ppid=2591 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:56.340844 kernel: audit: type=1327 audit(1765857116.328:338): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Dec 16 03:51:56.328000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Dec 16 03:51:56.340960 kubelet[2591]: E1216 03:51:56.340628 2591 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.36.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.36.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 03:51:56.342261 kubelet[2591]: I1216 03:51:56.342235 2591 factory.go:223] Registration of the systemd container factory successfully
Dec 16 03:51:56.342499 kubelet[2591]: I1216 03:51:56.342472 2591 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 03:51:56.343803 kernel: audit: type=1325 audit(1765857116.343:339): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2610 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:56.343000 audit[2610]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2610 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:56.344454 kubelet[2591]: E1216 03:51:56.344430 2591 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 03:51:56.346770 kubelet[2591]: I1216 03:51:56.345107 2591 factory.go:223] Registration of the containerd container factory successfully
Dec 16 03:51:56.343000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffdf4fc290 a2=0 a3=0 items=0 ppid=2591 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:56.352745 kernel: audit: type=1300 audit(1765857116.343:339): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffdf4fc290 a2=0 a3=0 items=0 ppid=2591 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:51:56.343000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Dec 16 03:51:56.354000 audit[2612]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2612 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:56.358380 kernel: audit: type=1327 audit(1765857116.343:339): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Dec 16 03:51:56.358458 kernel: audit: type=1325 audit(1765857116.354:340): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2612 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:51:56.354000 audit[2612]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc2e5fdbc0
a2=0 a3=0 items=0 ppid=2591 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:56.354000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:51:56.368000 audit[2615]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:56.368000 audit[2615]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffdfb0b070 a2=0 a3=0 items=0 ppid=2591 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:56.368000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 03:51:56.369707 kubelet[2591]: I1216 03:51:56.369206 2591 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Dec 16 03:51:56.370000 audit[2616]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2616 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:56.370000 audit[2616]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd745d8160 a2=0 a3=0 items=0 ppid=2591 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:56.370000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:51:56.371846 kubelet[2591]: I1216 03:51:56.371823 2591 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 03:51:56.371964 kubelet[2591]: I1216 03:51:56.371944 2591 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 03:51:56.372116 kubelet[2591]: I1216 03:51:56.372093 2591 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 03:51:56.372227 kubelet[2591]: I1216 03:51:56.372208 2591 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 03:51:56.372425 kubelet[2591]: E1216 03:51:56.372390 2591 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:51:56.374000 audit[2617]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2617 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:56.374000 audit[2617]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff0b1ab910 a2=0 a3=0 items=0 ppid=2591 pid=2617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:56.374000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:51:56.377417 kubelet[2591]: E1216 03:51:56.377370 2591 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.36.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.36.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 03:51:56.378000 audit[2620]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2620 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:56.378000 audit[2620]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff47638f40 a2=0 a3=0 items=0 ppid=2591 pid=2620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:56.378000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:51:56.379000 audit[2623]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2623 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:56.379000 audit[2623]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcaa67ae00 a2=0 a3=0 items=0 ppid=2591 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:56.379000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:51:56.384000 audit[2624]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2624 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:56.384000 audit[2624]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8e3be570 a2=0 a3=0 items=0 ppid=2591 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:56.384000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:51:56.384000 audit[2625]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2625 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:51:56.384000 audit[2625]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe01f2ef60 a2=0 a3=0 items=0 ppid=2591 pid=2625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:56.384000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:51:56.386000 audit[2626]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2626 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:51:56.386000 audit[2626]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc37b99bf0 a2=0 a3=0 items=0 ppid=2591 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:56.386000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:51:56.396834 kubelet[2591]: I1216 03:51:56.396806 2591 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:51:56.397456 kubelet[2591]: I1216 03:51:56.397043 2591 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:51:56.397456 kubelet[2591]: I1216 03:51:56.397082 2591 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:51:56.403212 kubelet[2591]: I1216 03:51:56.403189 2591 policy_none.go:49] "None policy: Start" Dec 16 03:51:56.403383 kubelet[2591]: I1216 03:51:56.403351 2591 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:51:56.403510 kubelet[2591]: I1216 03:51:56.403492 2591 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:51:56.421338 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 03:51:56.435780 kubelet[2591]: E1216 03:51:56.435749 2591 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-n64tt.gb1.brightbox.com\" not found" Dec 16 03:51:56.436475 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Dec 16 03:51:56.443464 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 03:51:56.453106 kubelet[2591]: E1216 03:51:56.452087 2591 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 03:51:56.453106 kubelet[2591]: I1216 03:51:56.452469 2591 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:51:56.453106 kubelet[2591]: I1216 03:51:56.452524 2591 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:51:56.453106 kubelet[2591]: I1216 03:51:56.452978 2591 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:51:56.457072 kubelet[2591]: E1216 03:51:56.457045 2591 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 03:51:56.457599 kubelet[2591]: E1216 03:51:56.457576 2591 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-n64tt.gb1.brightbox.com\" not found" Dec 16 03:51:56.493229 systemd[1]: Created slice kubepods-burstable-poddaa5d52a2b3e91625194290f9a0da3cf.slice - libcontainer container kubepods-burstable-poddaa5d52a2b3e91625194290f9a0da3cf.slice. Dec 16 03:51:56.503741 kubelet[2591]: E1216 03:51:56.502017 2591 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n64tt.gb1.brightbox.com\" not found" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:56.508696 systemd[1]: Created slice kubepods-burstable-pod1313a7ea4459a30004b12246d297b199.slice - libcontainer container kubepods-burstable-pod1313a7ea4459a30004b12246d297b199.slice. 
Dec 16 03:51:56.513006 kubelet[2591]: E1216 03:51:56.512969 2591 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n64tt.gb1.brightbox.com\" not found" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:56.528075 systemd[1]: Created slice kubepods-burstable-podddc3d51211fbc13af9a4dae25d1640c3.slice - libcontainer container kubepods-burstable-podddc3d51211fbc13af9a4dae25d1640c3.slice. Dec 16 03:51:56.531436 kubelet[2591]: E1216 03:51:56.531393 2591 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n64tt.gb1.brightbox.com\" not found" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:56.539121 kubelet[2591]: I1216 03:51:56.539048 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddc3d51211fbc13af9a4dae25d1640c3-ca-certs\") pod \"kube-controller-manager-srv-n64tt.gb1.brightbox.com\" (UID: \"ddc3d51211fbc13af9a4dae25d1640c3\") " pod="kube-system/kube-controller-manager-srv-n64tt.gb1.brightbox.com" Dec 16 03:51:56.539249 kubelet[2591]: I1216 03:51:56.539143 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddc3d51211fbc13af9a4dae25d1640c3-k8s-certs\") pod \"kube-controller-manager-srv-n64tt.gb1.brightbox.com\" (UID: \"ddc3d51211fbc13af9a4dae25d1640c3\") " pod="kube-system/kube-controller-manager-srv-n64tt.gb1.brightbox.com" Dec 16 03:51:56.539324 kubelet[2591]: I1216 03:51:56.539286 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ddc3d51211fbc13af9a4dae25d1640c3-kubeconfig\") pod \"kube-controller-manager-srv-n64tt.gb1.brightbox.com\" (UID: \"ddc3d51211fbc13af9a4dae25d1640c3\") " 
pod="kube-system/kube-controller-manager-srv-n64tt.gb1.brightbox.com" Dec 16 03:51:56.539413 kubelet[2591]: I1216 03:51:56.539362 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddc3d51211fbc13af9a4dae25d1640c3-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-n64tt.gb1.brightbox.com\" (UID: \"ddc3d51211fbc13af9a4dae25d1640c3\") " pod="kube-system/kube-controller-manager-srv-n64tt.gb1.brightbox.com" Dec 16 03:51:56.539510 kubelet[2591]: I1216 03:51:56.539466 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/daa5d52a2b3e91625194290f9a0da3cf-kubeconfig\") pod \"kube-scheduler-srv-n64tt.gb1.brightbox.com\" (UID: \"daa5d52a2b3e91625194290f9a0da3cf\") " pod="kube-system/kube-scheduler-srv-n64tt.gb1.brightbox.com" Dec 16 03:51:56.539567 kubelet[2591]: I1216 03:51:56.539549 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1313a7ea4459a30004b12246d297b199-k8s-certs\") pod \"kube-apiserver-srv-n64tt.gb1.brightbox.com\" (UID: \"1313a7ea4459a30004b12246d297b199\") " pod="kube-system/kube-apiserver-srv-n64tt.gb1.brightbox.com" Dec 16 03:51:56.539728 kubelet[2591]: I1216 03:51:56.539652 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ddc3d51211fbc13af9a4dae25d1640c3-flexvolume-dir\") pod \"kube-controller-manager-srv-n64tt.gb1.brightbox.com\" (UID: \"ddc3d51211fbc13af9a4dae25d1640c3\") " pod="kube-system/kube-controller-manager-srv-n64tt.gb1.brightbox.com" Dec 16 03:51:56.539807 kubelet[2591]: I1216 03:51:56.539767 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/1313a7ea4459a30004b12246d297b199-ca-certs\") pod \"kube-apiserver-srv-n64tt.gb1.brightbox.com\" (UID: \"1313a7ea4459a30004b12246d297b199\") " pod="kube-system/kube-apiserver-srv-n64tt.gb1.brightbox.com" Dec 16 03:51:56.539879 kubelet[2591]: I1216 03:51:56.539821 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1313a7ea4459a30004b12246d297b199-usr-share-ca-certificates\") pod \"kube-apiserver-srv-n64tt.gb1.brightbox.com\" (UID: \"1313a7ea4459a30004b12246d297b199\") " pod="kube-system/kube-apiserver-srv-n64tt.gb1.brightbox.com" Dec 16 03:51:56.541445 kubelet[2591]: E1216 03:51:56.541398 2591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.36.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-n64tt.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.36.234:6443: connect: connection refused" interval="400ms" Dec 16 03:51:56.555559 kubelet[2591]: I1216 03:51:56.555461 2591 kubelet_node_status.go:75] "Attempting to register node" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:56.557990 kubelet[2591]: E1216 03:51:56.557926 2591 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.36.234:6443/api/v1/nodes\": dial tcp 10.230.36.234:6443: connect: connection refused" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:56.761329 kubelet[2591]: I1216 03:51:56.761141 2591 kubelet_node_status.go:75] "Attempting to register node" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:56.762045 kubelet[2591]: E1216 03:51:56.761936 2591 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.36.234:6443/api/v1/nodes\": dial tcp 10.230.36.234:6443: connect: connection refused" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:56.805462 containerd[1640]: time="2025-12-16T03:51:56.805414025Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-n64tt.gb1.brightbox.com,Uid:daa5d52a2b3e91625194290f9a0da3cf,Namespace:kube-system,Attempt:0,}" Dec 16 03:51:56.813966 containerd[1640]: time="2025-12-16T03:51:56.813922731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-n64tt.gb1.brightbox.com,Uid:1313a7ea4459a30004b12246d297b199,Namespace:kube-system,Attempt:0,}" Dec 16 03:51:56.835314 containerd[1640]: time="2025-12-16T03:51:56.834107907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-n64tt.gb1.brightbox.com,Uid:ddc3d51211fbc13af9a4dae25d1640c3,Namespace:kube-system,Attempt:0,}" Dec 16 03:51:56.871535 kubelet[2591]: E1216 03:51:56.871369 2591 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.36.234:6443/api/v1/namespaces/default/events\": dial tcp 10.230.36.234:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-n64tt.gb1.brightbox.com.188195b0fc1f9c05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-n64tt.gb1.brightbox.com,UID:srv-n64tt.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-n64tt.gb1.brightbox.com,},FirstTimestamp:2025-12-16 03:51:56.282657797 +0000 UTC m=+1.321652062,LastTimestamp:2025-12-16 03:51:56.282657797 +0000 UTC m=+1.321652062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-n64tt.gb1.brightbox.com,}" Dec 16 03:51:56.943694 kubelet[2591]: E1216 03:51:56.942471 2591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.36.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-n64tt.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.36.234:6443: connect: connection refused" 
interval="800ms" Dec 16 03:51:56.980743 containerd[1640]: time="2025-12-16T03:51:56.980433174Z" level=info msg="connecting to shim 2206a254601f24c8c00914e0f99f17a91202aaf3ad3c497fa6c44f375e1b7067" address="unix:///run/containerd/s/751da365b2bf1c99b17df7b0c6dcae7b76de3b102c7fd54d70237ecdc2b411c6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:51:56.983949 containerd[1640]: time="2025-12-16T03:51:56.983889685Z" level=info msg="connecting to shim 3463c1b394061fcaf6f891e715c42b794c52d0e47f55552c1d785059237d318a" address="unix:///run/containerd/s/5530e9685ddb36455c94b0e2ddc439c1062f4170a748350da5b338f84f593536" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:51:56.986774 containerd[1640]: time="2025-12-16T03:51:56.986693994Z" level=info msg="connecting to shim 65d12e1f3fa5680859e226ac4b05409771b7ddb75fbb19bed500056b33c4538e" address="unix:///run/containerd/s/26735754a897dc3297e9f473468d5690de1ba26e04bb19dbce22802e34eca62c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:51:57.117035 systemd[1]: Started cri-containerd-2206a254601f24c8c00914e0f99f17a91202aaf3ad3c497fa6c44f375e1b7067.scope - libcontainer container 2206a254601f24c8c00914e0f99f17a91202aaf3ad3c497fa6c44f375e1b7067. Dec 16 03:51:57.125986 systemd[1]: Started cri-containerd-3463c1b394061fcaf6f891e715c42b794c52d0e47f55552c1d785059237d318a.scope - libcontainer container 3463c1b394061fcaf6f891e715c42b794c52d0e47f55552c1d785059237d318a. Dec 16 03:51:57.128937 systemd[1]: Started cri-containerd-65d12e1f3fa5680859e226ac4b05409771b7ddb75fbb19bed500056b33c4538e.scope - libcontainer container 65d12e1f3fa5680859e226ac4b05409771b7ddb75fbb19bed500056b33c4538e. 
Dec 16 03:51:57.165000 audit: BPF prog-id=86 op=LOAD Dec 16 03:51:57.166000 audit: BPF prog-id=87 op=LOAD Dec 16 03:51:57.166000 audit[2682]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2651 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303661323534363031663234633863303039313465306639396631 Dec 16 03:51:57.166000 audit: BPF prog-id=87 op=UNLOAD Dec 16 03:51:57.166000 audit[2682]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2651 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303661323534363031663234633863303039313465306639396631 Dec 16 03:51:57.173000 audit: BPF prog-id=88 op=LOAD Dec 16 03:51:57.173000 audit[2682]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2651 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.173000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303661323534363031663234633863303039313465306639396631 Dec 16 03:51:57.173000 audit: BPF prog-id=89 op=LOAD Dec 16 03:51:57.173000 audit[2682]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2651 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303661323534363031663234633863303039313465306639396631 Dec 16 03:51:57.173000 audit: BPF prog-id=89 op=UNLOAD Dec 16 03:51:57.173000 audit[2682]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2651 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303661323534363031663234633863303039313465306639396631 Dec 16 03:51:57.173000 audit: BPF prog-id=88 op=UNLOAD Dec 16 03:51:57.173000 audit[2682]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2651 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:51:57.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303661323534363031663234633863303039313465306639396631 Dec 16 03:51:57.173000 audit: BPF prog-id=90 op=LOAD Dec 16 03:51:57.173000 audit[2682]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2651 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232303661323534363031663234633863303039313465306639396631 Dec 16 03:51:57.175000 audit: BPF prog-id=91 op=LOAD Dec 16 03:51:57.177000 audit: BPF prog-id=92 op=LOAD Dec 16 03:51:57.178000 audit: BPF prog-id=93 op=LOAD Dec 16 03:51:57.178000 audit[2689]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2655 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363363316233393430363166636166366638393165373135633432 Dec 16 03:51:57.178000 audit: BPF prog-id=93 op=UNLOAD Dec 16 03:51:57.178000 audit[2689]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2655 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363363316233393430363166636166366638393165373135633432 Dec 16 03:51:57.180259 kubelet[2591]: I1216 03:51:57.176617 2591 kubelet_node_status.go:75] "Attempting to register node" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:57.180259 kubelet[2591]: E1216 03:51:57.179278 2591 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.36.234:6443/api/v1/nodes\": dial tcp 10.230.36.234:6443: connect: connection refused" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:57.180000 audit: BPF prog-id=94 op=LOAD Dec 16 03:51:57.180000 audit[2685]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2662 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635643132653166336661353638303835396532323661633462303534 Dec 16 03:51:57.180000 audit: BPF prog-id=94 op=UNLOAD Dec 16 03:51:57.180000 audit[2685]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2662 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.180000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635643132653166336661353638303835396532323661633462303534 Dec 16 03:51:57.181000 audit: BPF prog-id=95 op=LOAD Dec 16 03:51:57.181000 audit[2689]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2655 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363363316233393430363166636166366638393165373135633432 Dec 16 03:51:57.181000 audit: BPF prog-id=96 op=LOAD Dec 16 03:51:57.181000 audit[2689]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2655 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363363316233393430363166636166366638393165373135633432 Dec 16 03:51:57.181000 audit: BPF prog-id=96 op=UNLOAD Dec 16 03:51:57.181000 audit[2689]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2655 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 03:51:57.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363363316233393430363166636166366638393165373135633432 Dec 16 03:51:57.182000 audit: BPF prog-id=95 op=UNLOAD Dec 16 03:51:57.182000 audit[2689]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2655 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363363316233393430363166636166366638393165373135633432 Dec 16 03:51:57.182000 audit: BPF prog-id=97 op=LOAD Dec 16 03:51:57.183000 audit: BPF prog-id=98 op=LOAD Dec 16 03:51:57.183000 audit[2685]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2662 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635643132653166336661353638303835396532323661633462303534 Dec 16 03:51:57.183000 audit: BPF prog-id=99 op=LOAD Dec 16 03:51:57.183000 audit[2685]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2662 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635643132653166336661353638303835396532323661633462303534 Dec 16 03:51:57.182000 audit[2689]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2655 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334363363316233393430363166636166366638393165373135633432 Dec 16 03:51:57.184000 audit: BPF prog-id=99 op=UNLOAD Dec 16 03:51:57.184000 audit[2685]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2662 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635643132653166336661353638303835396532323661633462303534 Dec 16 03:51:57.184000 audit: BPF prog-id=98 op=UNLOAD Dec 16 03:51:57.184000 audit[2685]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2662 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635643132653166336661353638303835396532323661633462303534 Dec 16 03:51:57.184000 audit: BPF prog-id=100 op=LOAD Dec 16 03:51:57.184000 audit[2685]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2662 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635643132653166336661353638303835396532323661633462303534 Dec 16 03:51:57.279678 containerd[1640]: time="2025-12-16T03:51:57.279549976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-n64tt.gb1.brightbox.com,Uid:daa5d52a2b3e91625194290f9a0da3cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"2206a254601f24c8c00914e0f99f17a91202aaf3ad3c497fa6c44f375e1b7067\"" Dec 16 03:51:57.284895 containerd[1640]: time="2025-12-16T03:51:57.284400230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-n64tt.gb1.brightbox.com,Uid:1313a7ea4459a30004b12246d297b199,Namespace:kube-system,Attempt:0,} returns sandbox id \"3463c1b394061fcaf6f891e715c42b794c52d0e47f55552c1d785059237d318a\"" Dec 16 03:51:57.288364 containerd[1640]: time="2025-12-16T03:51:57.288330810Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-srv-n64tt.gb1.brightbox.com,Uid:ddc3d51211fbc13af9a4dae25d1640c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"65d12e1f3fa5680859e226ac4b05409771b7ddb75fbb19bed500056b33c4538e\"" Dec 16 03:51:57.290753 containerd[1640]: time="2025-12-16T03:51:57.290700823Z" level=info msg="CreateContainer within sandbox \"2206a254601f24c8c00914e0f99f17a91202aaf3ad3c497fa6c44f375e1b7067\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 03:51:57.294898 containerd[1640]: time="2025-12-16T03:51:57.294842207Z" level=info msg="CreateContainer within sandbox \"3463c1b394061fcaf6f891e715c42b794c52d0e47f55552c1d785059237d318a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 03:51:57.296931 containerd[1640]: time="2025-12-16T03:51:57.296891133Z" level=info msg="CreateContainer within sandbox \"65d12e1f3fa5680859e226ac4b05409771b7ddb75fbb19bed500056b33c4538e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 03:51:57.328303 containerd[1640]: time="2025-12-16T03:51:57.327783007Z" level=info msg="Container 2c2e99a808de66c861383d138109614e95c0da0a8081b67956cbebf56aac63e8: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:51:57.329850 containerd[1640]: time="2025-12-16T03:51:57.329819457Z" level=info msg="Container 798b443dc39336f25fa07389b2353e7b32ad8510ae9afb5a17cc86e697e06b48: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:51:57.333586 containerd[1640]: time="2025-12-16T03:51:57.333491270Z" level=info msg="Container 0bd8cc5695444ea04070f843ca432a172b250c912a625835e1e2b0918bdd43e3: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:51:57.337686 containerd[1640]: time="2025-12-16T03:51:57.337652026Z" level=info msg="CreateContainer within sandbox \"3463c1b394061fcaf6f891e715c42b794c52d0e47f55552c1d785059237d318a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"2c2e99a808de66c861383d138109614e95c0da0a8081b67956cbebf56aac63e8\"" Dec 16 03:51:57.339213 containerd[1640]: time="2025-12-16T03:51:57.339180344Z" level=info msg="StartContainer for \"2c2e99a808de66c861383d138109614e95c0da0a8081b67956cbebf56aac63e8\"" Dec 16 03:51:57.342394 containerd[1640]: time="2025-12-16T03:51:57.342359980Z" level=info msg="CreateContainer within sandbox \"65d12e1f3fa5680859e226ac4b05409771b7ddb75fbb19bed500056b33c4538e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0bd8cc5695444ea04070f843ca432a172b250c912a625835e1e2b0918bdd43e3\"" Dec 16 03:51:57.342921 containerd[1640]: time="2025-12-16T03:51:57.342799222Z" level=info msg="StartContainer for \"0bd8cc5695444ea04070f843ca432a172b250c912a625835e1e2b0918bdd43e3\"" Dec 16 03:51:57.343557 containerd[1640]: time="2025-12-16T03:51:57.343498739Z" level=info msg="connecting to shim 2c2e99a808de66c861383d138109614e95c0da0a8081b67956cbebf56aac63e8" address="unix:///run/containerd/s/5530e9685ddb36455c94b0e2ddc439c1062f4170a748350da5b338f84f593536" protocol=ttrpc version=3 Dec 16 03:51:57.349205 containerd[1640]: time="2025-12-16T03:51:57.348890491Z" level=info msg="CreateContainer within sandbox \"2206a254601f24c8c00914e0f99f17a91202aaf3ad3c497fa6c44f375e1b7067\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"798b443dc39336f25fa07389b2353e7b32ad8510ae9afb5a17cc86e697e06b48\"" Dec 16 03:51:57.349205 containerd[1640]: time="2025-12-16T03:51:57.349052035Z" level=info msg="connecting to shim 0bd8cc5695444ea04070f843ca432a172b250c912a625835e1e2b0918bdd43e3" address="unix:///run/containerd/s/26735754a897dc3297e9f473468d5690de1ba26e04bb19dbce22802e34eca62c" protocol=ttrpc version=3 Dec 16 03:51:57.350230 containerd[1640]: time="2025-12-16T03:51:57.350185705Z" level=info msg="StartContainer for \"798b443dc39336f25fa07389b2353e7b32ad8510ae9afb5a17cc86e697e06b48\"" Dec 16 03:51:57.356982 containerd[1640]: 
time="2025-12-16T03:51:57.356944359Z" level=info msg="connecting to shim 798b443dc39336f25fa07389b2353e7b32ad8510ae9afb5a17cc86e697e06b48" address="unix:///run/containerd/s/751da365b2bf1c99b17df7b0c6dcae7b76de3b102c7fd54d70237ecdc2b411c6" protocol=ttrpc version=3 Dec 16 03:51:57.392113 systemd[1]: Started cri-containerd-798b443dc39336f25fa07389b2353e7b32ad8510ae9afb5a17cc86e697e06b48.scope - libcontainer container 798b443dc39336f25fa07389b2353e7b32ad8510ae9afb5a17cc86e697e06b48. Dec 16 03:51:57.403993 systemd[1]: Started cri-containerd-0bd8cc5695444ea04070f843ca432a172b250c912a625835e1e2b0918bdd43e3.scope - libcontainer container 0bd8cc5695444ea04070f843ca432a172b250c912a625835e1e2b0918bdd43e3. Dec 16 03:51:57.416980 systemd[1]: Started cri-containerd-2c2e99a808de66c861383d138109614e95c0da0a8081b67956cbebf56aac63e8.scope - libcontainer container 2c2e99a808de66c861383d138109614e95c0da0a8081b67956cbebf56aac63e8. Dec 16 03:51:57.438000 audit: BPF prog-id=101 op=LOAD Dec 16 03:51:57.441000 audit: BPF prog-id=102 op=LOAD Dec 16 03:51:57.441000 audit[2771]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2651 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739386234343364633339333336663235666130373338396232333533 Dec 16 03:51:57.441000 audit: BPF prog-id=102 op=UNLOAD Dec 16 03:51:57.441000 audit[2771]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2651 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739386234343364633339333336663235666130373338396232333533 Dec 16 03:51:57.441000 audit: BPF prog-id=103 op=LOAD Dec 16 03:51:57.441000 audit[2771]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2651 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739386234343364633339333336663235666130373338396232333533 Dec 16 03:51:57.441000 audit: BPF prog-id=104 op=LOAD Dec 16 03:51:57.441000 audit[2771]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2651 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739386234343364633339333336663235666130373338396232333533 Dec 16 03:51:57.441000 audit: BPF prog-id=104 op=UNLOAD Dec 16 03:51:57.441000 audit[2771]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2651 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739386234343364633339333336663235666130373338396232333533 Dec 16 03:51:57.441000 audit: BPF prog-id=103 op=UNLOAD Dec 16 03:51:57.441000 audit[2771]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2651 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739386234343364633339333336663235666130373338396232333533 Dec 16 03:51:57.441000 audit: BPF prog-id=105 op=LOAD Dec 16 03:51:57.441000 audit[2771]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2651 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739386234343364633339333336663235666130373338396232333533 Dec 16 03:51:57.445922 kubelet[2591]: E1216 03:51:57.445769 2591 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.36.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": 
dial tcp 10.230.36.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 03:51:57.448795 kubelet[2591]: E1216 03:51:57.448477 2591 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.36.234:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.36.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 03:51:57.457000 audit: BPF prog-id=106 op=LOAD Dec 16 03:51:57.458000 audit: BPF prog-id=107 op=LOAD Dec 16 03:51:57.458000 audit[2770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2662 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643863633536393534343465613034303730663834336361343332 Dec 16 03:51:57.458000 audit: BPF prog-id=107 op=UNLOAD Dec 16 03:51:57.458000 audit[2770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2662 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643863633536393534343465613034303730663834336361343332 Dec 16 
03:51:57.459000 audit: BPF prog-id=108 op=LOAD Dec 16 03:51:57.459000 audit[2770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2662 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643863633536393534343465613034303730663834336361343332 Dec 16 03:51:57.460000 audit: BPF prog-id=109 op=LOAD Dec 16 03:51:57.460000 audit[2770]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2662 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.460000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643863633536393534343465613034303730663834336361343332 Dec 16 03:51:57.460000 audit: BPF prog-id=109 op=UNLOAD Dec 16 03:51:57.460000 audit[2770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2662 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.460000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643863633536393534343465613034303730663834336361343332 Dec 16 03:51:57.461000 audit: BPF prog-id=108 op=UNLOAD Dec 16 03:51:57.461000 audit[2770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2662 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643863633536393534343465613034303730663834336361343332 Dec 16 03:51:57.461000 audit: BPF prog-id=110 op=LOAD Dec 16 03:51:57.461000 audit[2770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2662 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062643863633536393534343465613034303730663834336361343332 Dec 16 03:51:57.465000 audit: BPF prog-id=111 op=LOAD Dec 16 03:51:57.466000 audit: BPF prog-id=112 op=LOAD Dec 16 03:51:57.466000 audit[2769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2655 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263326539396138303864653636633836313338336431333831303936 Dec 16 03:51:57.466000 audit: BPF prog-id=112 op=UNLOAD Dec 16 03:51:57.466000 audit[2769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2655 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263326539396138303864653636633836313338336431333831303936 Dec 16 03:51:57.466000 audit: BPF prog-id=113 op=LOAD Dec 16 03:51:57.466000 audit[2769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2655 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263326539396138303864653636633836313338336431333831303936 Dec 16 03:51:57.467000 audit: BPF prog-id=114 op=LOAD Dec 16 03:51:57.467000 audit[2769]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2655 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263326539396138303864653636633836313338336431333831303936 Dec 16 03:51:57.467000 audit: BPF prog-id=114 op=UNLOAD Dec 16 03:51:57.467000 audit[2769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2655 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263326539396138303864653636633836313338336431333831303936 Dec 16 03:51:57.467000 audit: BPF prog-id=113 op=UNLOAD Dec 16 03:51:57.467000 audit[2769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2655 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263326539396138303864653636633836313338336431333831303936 Dec 16 03:51:57.468000 audit: BPF prog-id=115 op=LOAD Dec 16 03:51:57.468000 audit[2769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2655 pid=2769 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:51:57.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263326539396138303864653636633836313338336431333831303936 Dec 16 03:51:57.536544 containerd[1640]: time="2025-12-16T03:51:57.536424453Z" level=info msg="StartContainer for \"2c2e99a808de66c861383d138109614e95c0da0a8081b67956cbebf56aac63e8\" returns successfully" Dec 16 03:51:57.572899 containerd[1640]: time="2025-12-16T03:51:57.572759175Z" level=info msg="StartContainer for \"0bd8cc5695444ea04070f843ca432a172b250c912a625835e1e2b0918bdd43e3\" returns successfully" Dec 16 03:51:57.575029 containerd[1640]: time="2025-12-16T03:51:57.574993282Z" level=info msg="StartContainer for \"798b443dc39336f25fa07389b2353e7b32ad8510ae9afb5a17cc86e697e06b48\" returns successfully" Dec 16 03:51:57.601927 kubelet[2591]: E1216 03:51:57.601858 2591 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.36.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-n64tt.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.36.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 03:51:57.744472 kubelet[2591]: E1216 03:51:57.744310 2591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.36.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-n64tt.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.36.234:6443: connect: connection refused" interval="1.6s" Dec 16 03:51:57.932355 kubelet[2591]: E1216 03:51:57.932301 2591 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.230.36.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.36.234:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 03:51:57.983906 kubelet[2591]: I1216 03:51:57.983809 2591 kubelet_node_status.go:75] "Attempting to register node" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:57.984548 kubelet[2591]: E1216 03:51:57.984514 2591 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.36.234:6443/api/v1/nodes\": dial tcp 10.230.36.234:6443: connect: connection refused" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:58.435541 kubelet[2591]: E1216 03:51:58.435027 2591 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n64tt.gb1.brightbox.com\" not found" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:58.438351 kubelet[2591]: E1216 03:51:58.438313 2591 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n64tt.gb1.brightbox.com\" not found" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:58.443044 kubelet[2591]: E1216 03:51:58.442828 2591 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n64tt.gb1.brightbox.com\" not found" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:59.447337 kubelet[2591]: E1216 03:51:59.447290 2591 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n64tt.gb1.brightbox.com\" not found" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:59.447970 kubelet[2591]: E1216 03:51:59.447872 2591 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n64tt.gb1.brightbox.com\" not found" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:59.448486 kubelet[2591]: E1216 
03:51:59.448461 2591 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n64tt.gb1.brightbox.com\" not found" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:51:59.589135 kubelet[2591]: I1216 03:51:59.588394 2591 kubelet_node_status.go:75] "Attempting to register node" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:52:00.447784 kubelet[2591]: E1216 03:52:00.447467 2591 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n64tt.gb1.brightbox.com\" not found" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:52:00.450276 kubelet[2591]: E1216 03:52:00.449991 2591 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n64tt.gb1.brightbox.com\" not found" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:52:00.880566 kubelet[2591]: E1216 03:52:00.880511 2591 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-n64tt.gb1.brightbox.com\" not found" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:52:01.043791 kubelet[2591]: I1216 03:52:01.043113 2591 kubelet_node_status.go:78] "Successfully registered node" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:52:01.137562 kubelet[2591]: I1216 03:52:01.137332 2591 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:01.173825 kubelet[2591]: E1216 03:52:01.173095 2591 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-n64tt.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:01.173825 kubelet[2591]: I1216 03:52:01.173150 2591 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:01.178974 kubelet[2591]: E1216 03:52:01.178943 2591 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-n64tt.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:01.180830 kubelet[2591]: I1216 03:52:01.179101 2591 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:01.183445 kubelet[2591]: E1216 03:52:01.183395 2591 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-n64tt.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:01.274567 kubelet[2591]: I1216 03:52:01.274279 2591 apiserver.go:52] "Watching apiserver" Dec 16 03:52:01.338902 kubelet[2591]: I1216 03:52:01.338850 2591 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 03:52:01.831575 kubelet[2591]: I1216 03:52:01.831251 2591 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:01.839241 kubelet[2591]: I1216 03:52:01.838929 2591 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:52:03.042200 systemd[1]: Reload requested from client PID 2873 ('systemctl') (unit session-12.scope)... Dec 16 03:52:03.042797 systemd[1]: Reloading... Dec 16 03:52:03.182798 zram_generator::config[2921]: No configuration found. Dec 16 03:52:03.565815 systemd[1]: Reloading finished in 522 ms. Dec 16 03:52:03.603601 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:52:03.623314 systemd[1]: kubelet.service: Deactivated successfully. 
Dec 16 03:52:03.623806 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:52:03.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:52:03.627483 kernel: kauditd_printk_skb: 158 callbacks suppressed Dec 16 03:52:03.627600 kernel: audit: type=1131 audit(1765857123.623:397): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:52:03.632059 systemd[1]: kubelet.service: Consumed 1.853s CPU time, 128.5M memory peak. Dec 16 03:52:03.638075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:52:03.638000 audit: BPF prog-id=116 op=LOAD Dec 16 03:52:03.641739 kernel: audit: type=1334 audit(1765857123.638:398): prog-id=116 op=LOAD Dec 16 03:52:03.638000 audit: BPF prog-id=69 op=UNLOAD Dec 16 03:52:03.644756 kernel: audit: type=1334 audit(1765857123.638:399): prog-id=69 op=UNLOAD Dec 16 03:52:03.639000 audit: BPF prog-id=117 op=LOAD Dec 16 03:52:03.647737 kernel: audit: type=1334 audit(1765857123.639:400): prog-id=117 op=LOAD Dec 16 03:52:03.651544 kernel: audit: type=1334 audit(1765857123.639:401): prog-id=118 op=LOAD Dec 16 03:52:03.651633 kernel: audit: type=1334 audit(1765857123.639:402): prog-id=70 op=UNLOAD Dec 16 03:52:03.639000 audit: BPF prog-id=118 op=LOAD Dec 16 03:52:03.639000 audit: BPF prog-id=70 op=UNLOAD Dec 16 03:52:03.639000 audit: BPF prog-id=71 op=UNLOAD Dec 16 03:52:03.639000 audit: BPF prog-id=119 op=LOAD Dec 16 03:52:03.655503 kernel: audit: type=1334 audit(1765857123.639:403): prog-id=71 op=UNLOAD Dec 16 03:52:03.655619 kernel: audit: type=1334 audit(1765857123.639:404): prog-id=119 op=LOAD Dec 16 03:52:03.655677 kernel: audit: type=1334 audit(1765857123.639:405): prog-id=120 
op=LOAD Dec 16 03:52:03.639000 audit: BPF prog-id=120 op=LOAD Dec 16 03:52:03.639000 audit: BPF prog-id=65 op=UNLOAD Dec 16 03:52:03.658286 kernel: audit: type=1334 audit(1765857123.639:406): prog-id=65 op=UNLOAD Dec 16 03:52:03.639000 audit: BPF prog-id=66 op=UNLOAD Dec 16 03:52:03.642000 audit: BPF prog-id=121 op=LOAD Dec 16 03:52:03.642000 audit: BPF prog-id=72 op=UNLOAD Dec 16 03:52:03.642000 audit: BPF prog-id=122 op=LOAD Dec 16 03:52:03.642000 audit: BPF prog-id=123 op=LOAD Dec 16 03:52:03.642000 audit: BPF prog-id=73 op=UNLOAD Dec 16 03:52:03.642000 audit: BPF prog-id=74 op=UNLOAD Dec 16 03:52:03.643000 audit: BPF prog-id=124 op=LOAD Dec 16 03:52:03.643000 audit: BPF prog-id=78 op=UNLOAD Dec 16 03:52:03.643000 audit: BPF prog-id=125 op=LOAD Dec 16 03:52:03.643000 audit: BPF prog-id=126 op=LOAD Dec 16 03:52:03.643000 audit: BPF prog-id=79 op=UNLOAD Dec 16 03:52:03.644000 audit: BPF prog-id=80 op=UNLOAD Dec 16 03:52:03.645000 audit: BPF prog-id=127 op=LOAD Dec 16 03:52:03.645000 audit: BPF prog-id=67 op=UNLOAD Dec 16 03:52:03.648000 audit: BPF prog-id=128 op=LOAD Dec 16 03:52:03.648000 audit: BPF prog-id=75 op=UNLOAD Dec 16 03:52:03.648000 audit: BPF prog-id=129 op=LOAD Dec 16 03:52:03.648000 audit: BPF prog-id=130 op=LOAD Dec 16 03:52:03.648000 audit: BPF prog-id=76 op=UNLOAD Dec 16 03:52:03.648000 audit: BPF prog-id=77 op=UNLOAD Dec 16 03:52:03.649000 audit: BPF prog-id=131 op=LOAD Dec 16 03:52:03.649000 audit: BPF prog-id=68 op=UNLOAD Dec 16 03:52:03.651000 audit: BPF prog-id=132 op=LOAD Dec 16 03:52:03.651000 audit: BPF prog-id=83 op=UNLOAD Dec 16 03:52:03.651000 audit: BPF prog-id=133 op=LOAD Dec 16 03:52:03.651000 audit: BPF prog-id=134 op=LOAD Dec 16 03:52:03.651000 audit: BPF prog-id=84 op=UNLOAD Dec 16 03:52:03.651000 audit: BPF prog-id=85 op=UNLOAD Dec 16 03:52:03.661000 audit: BPF prog-id=135 op=LOAD Dec 16 03:52:03.661000 audit: BPF prog-id=81 op=UNLOAD Dec 16 03:52:03.662000 audit: BPF prog-id=136 op=LOAD Dec 16 03:52:03.662000 audit: BPF 
prog-id=82 op=UNLOAD Dec 16 03:52:03.963976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:52:03.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:52:03.978200 (kubelet)[2985]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:52:04.058648 kubelet[2985]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:52:04.059847 kubelet[2985]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 03:52:04.059847 kubelet[2985]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 03:52:04.060373 kubelet[2985]: I1216 03:52:04.060169 2985 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:52:04.074930 kubelet[2985]: I1216 03:52:04.074889 2985 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 03:52:04.075741 kubelet[2985]: I1216 03:52:04.075118 2985 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:52:04.075741 kubelet[2985]: I1216 03:52:04.075536 2985 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 03:52:04.077962 kubelet[2985]: I1216 03:52:04.077935 2985 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 03:52:04.082997 kubelet[2985]: I1216 03:52:04.082944 2985 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:52:04.093160 kubelet[2985]: I1216 03:52:04.093113 2985 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:52:04.104589 kubelet[2985]: I1216 03:52:04.104546 2985 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 03:52:04.105211 kubelet[2985]: I1216 03:52:04.105169 2985 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:52:04.106791 kubelet[2985]: I1216 03:52:04.105313 2985 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-n64tt.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:52:04.106791 kubelet[2985]: I1216 03:52:04.105894 2985 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
03:52:04.106791 kubelet[2985]: I1216 03:52:04.105912 2985 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 03:52:04.106791 kubelet[2985]: I1216 03:52:04.105983 2985 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:52:04.106791 kubelet[2985]: I1216 03:52:04.106249 2985 kubelet.go:480] "Attempting to sync node with API server" Dec 16 03:52:04.107258 kubelet[2985]: I1216 03:52:04.106269 2985 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:52:04.107258 kubelet[2985]: I1216 03:52:04.106300 2985 kubelet.go:386] "Adding apiserver pod source" Dec 16 03:52:04.107258 kubelet[2985]: I1216 03:52:04.106328 2985 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:52:04.120413 kubelet[2985]: I1216 03:52:04.120379 2985 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:52:04.123341 kubelet[2985]: I1216 03:52:04.123309 2985 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 03:52:04.132136 kubelet[2985]: I1216 03:52:04.132087 2985 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:52:04.132388 kubelet[2985]: I1216 03:52:04.132365 2985 server.go:1289] "Started kubelet" Dec 16 03:52:04.141743 kubelet[2985]: I1216 03:52:04.139471 2985 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:52:04.155363 kubelet[2985]: I1216 03:52:04.154983 2985 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:52:04.156884 kubelet[2985]: I1216 03:52:04.156190 2985 server.go:317] "Adding debug handlers to kubelet server" Dec 16 03:52:04.166341 kubelet[2985]: I1216 03:52:04.166303 2985 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 03:52:04.171245 kubelet[2985]: I1216 03:52:04.171187 2985 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:52:04.171487 kubelet[2985]: I1216 03:52:04.171459 2985 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:52:04.173549 kubelet[2985]: I1216 03:52:04.173518 2985 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:52:04.176910 kubelet[2985]: I1216 03:52:04.176884 2985 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:52:04.182299 kubelet[2985]: I1216 03:52:04.181915 2985 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:52:04.182299 kubelet[2985]: I1216 03:52:04.182086 2985 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:52:04.188553 kubelet[2985]: I1216 03:52:04.188525 2985 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 03:52:04.189863 kubelet[2985]: I1216 03:52:04.189840 2985 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 03:52:04.190335 kubelet[2985]: I1216 03:52:04.189980 2985 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 03:52:04.190335 kubelet[2985]: I1216 03:52:04.190001 2985 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 03:52:04.190335 kubelet[2985]: E1216 03:52:04.190066 2985 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:52:04.198582 kubelet[2985]: I1216 03:52:04.198341 2985 factory.go:223] Registration of the systemd container factory successfully Dec 16 03:52:04.199085 kubelet[2985]: I1216 03:52:04.199029 2985 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:52:04.203978 kubelet[2985]: E1216 03:52:04.203869 2985 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:52:04.204487 kubelet[2985]: I1216 03:52:04.204427 2985 factory.go:223] Registration of the containerd container factory successfully Dec 16 03:52:04.291001 kubelet[2985]: E1216 03:52:04.290172 2985 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 03:52:04.300879 kubelet[2985]: I1216 03:52:04.300764 2985 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:52:04.300879 kubelet[2985]: I1216 03:52:04.300791 2985 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:52:04.300879 kubelet[2985]: I1216 03:52:04.300818 2985 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:52:04.302211 kubelet[2985]: I1216 03:52:04.301059 2985 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 03:52:04.302211 kubelet[2985]: I1216 03:52:04.301083 2985 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 03:52:04.302211 kubelet[2985]: I1216 03:52:04.301137 2985 policy_none.go:49] "None policy: Start" Dec 16 
03:52:04.302211 kubelet[2985]: I1216 03:52:04.301162 2985 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:52:04.302211 kubelet[2985]: I1216 03:52:04.301187 2985 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:52:04.302211 kubelet[2985]: I1216 03:52:04.301320 2985 state_mem.go:75] "Updated machine memory state" Dec 16 03:52:04.309902 kubelet[2985]: E1216 03:52:04.309852 2985 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 03:52:04.311517 kubelet[2985]: I1216 03:52:04.310963 2985 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:52:04.311517 kubelet[2985]: I1216 03:52:04.310990 2985 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:52:04.311517 kubelet[2985]: I1216 03:52:04.311355 2985 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:52:04.326287 kubelet[2985]: E1216 03:52:04.325018 2985 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 03:52:04.437308 kubelet[2985]: I1216 03:52:04.436277 2985 kubelet_node_status.go:75] "Attempting to register node" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:52:04.447073 kubelet[2985]: I1216 03:52:04.447038 2985 kubelet_node_status.go:124] "Node was previously registered" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:52:04.447195 kubelet[2985]: I1216 03:52:04.447144 2985 kubelet_node_status.go:78] "Successfully registered node" node="srv-n64tt.gb1.brightbox.com" Dec 16 03:52:04.492136 kubelet[2985]: I1216 03:52:04.491711 2985 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:04.495022 kubelet[2985]: I1216 03:52:04.494981 2985 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:04.495575 kubelet[2985]: I1216 03:52:04.495214 2985 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:04.503570 kubelet[2985]: I1216 03:52:04.502019 2985 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:52:04.504777 kubelet[2985]: I1216 03:52:04.504751 2985 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:52:04.504939 kubelet[2985]: I1216 03:52:04.504911 2985 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:52:04.505163 kubelet[2985]: E1216 03:52:04.505134 2985 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-n64tt.gb1.brightbox.com\" already exists" 
pod="kube-system/kube-apiserver-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:04.584896 kubelet[2985]: I1216 03:52:04.584334 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1313a7ea4459a30004b12246d297b199-usr-share-ca-certificates\") pod \"kube-apiserver-srv-n64tt.gb1.brightbox.com\" (UID: \"1313a7ea4459a30004b12246d297b199\") " pod="kube-system/kube-apiserver-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:04.584896 kubelet[2985]: I1216 03:52:04.584384 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddc3d51211fbc13af9a4dae25d1640c3-k8s-certs\") pod \"kube-controller-manager-srv-n64tt.gb1.brightbox.com\" (UID: \"ddc3d51211fbc13af9a4dae25d1640c3\") " pod="kube-system/kube-controller-manager-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:04.584896 kubelet[2985]: I1216 03:52:04.584424 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ddc3d51211fbc13af9a4dae25d1640c3-kubeconfig\") pod \"kube-controller-manager-srv-n64tt.gb1.brightbox.com\" (UID: \"ddc3d51211fbc13af9a4dae25d1640c3\") " pod="kube-system/kube-controller-manager-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:04.584896 kubelet[2985]: I1216 03:52:04.584452 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddc3d51211fbc13af9a4dae25d1640c3-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-n64tt.gb1.brightbox.com\" (UID: \"ddc3d51211fbc13af9a4dae25d1640c3\") " pod="kube-system/kube-controller-manager-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:04.584896 kubelet[2985]: I1216 03:52:04.584488 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1313a7ea4459a30004b12246d297b199-ca-certs\") pod \"kube-apiserver-srv-n64tt.gb1.brightbox.com\" (UID: \"1313a7ea4459a30004b12246d297b199\") " pod="kube-system/kube-apiserver-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:04.585307 kubelet[2985]: I1216 03:52:04.584515 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1313a7ea4459a30004b12246d297b199-k8s-certs\") pod \"kube-apiserver-srv-n64tt.gb1.brightbox.com\" (UID: \"1313a7ea4459a30004b12246d297b199\") " pod="kube-system/kube-apiserver-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:04.585307 kubelet[2985]: I1216 03:52:04.584541 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddc3d51211fbc13af9a4dae25d1640c3-ca-certs\") pod \"kube-controller-manager-srv-n64tt.gb1.brightbox.com\" (UID: \"ddc3d51211fbc13af9a4dae25d1640c3\") " pod="kube-system/kube-controller-manager-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:04.585307 kubelet[2985]: I1216 03:52:04.584570 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ddc3d51211fbc13af9a4dae25d1640c3-flexvolume-dir\") pod \"kube-controller-manager-srv-n64tt.gb1.brightbox.com\" (UID: \"ddc3d51211fbc13af9a4dae25d1640c3\") " pod="kube-system/kube-controller-manager-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:04.585307 kubelet[2985]: I1216 03:52:04.584598 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/daa5d52a2b3e91625194290f9a0da3cf-kubeconfig\") pod \"kube-scheduler-srv-n64tt.gb1.brightbox.com\" (UID: \"daa5d52a2b3e91625194290f9a0da3cf\") " pod="kube-system/kube-scheduler-srv-n64tt.gb1.brightbox.com" Dec 16 
03:52:05.117008 kubelet[2985]: I1216 03:52:05.116948 2985 apiserver.go:52] "Watching apiserver" Dec 16 03:52:05.182332 kubelet[2985]: I1216 03:52:05.182238 2985 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 03:52:05.270685 kubelet[2985]: I1216 03:52:05.270635 2985 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:05.293910 kubelet[2985]: I1216 03:52:05.293863 2985 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:52:05.295152 kubelet[2985]: E1216 03:52:05.294977 2985 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-n64tt.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-n64tt.gb1.brightbox.com" Dec 16 03:52:05.375737 kubelet[2985]: I1216 03:52:05.375373 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-n64tt.gb1.brightbox.com" podStartSLOduration=1.375336971 podStartE2EDuration="1.375336971s" podCreationTimestamp="2025-12-16 03:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:52:05.353271818 +0000 UTC m=+1.367898866" watchObservedRunningTime="2025-12-16 03:52:05.375336971 +0000 UTC m=+1.389963986" Dec 16 03:52:05.375737 kubelet[2985]: I1216 03:52:05.375515 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-n64tt.gb1.brightbox.com" podStartSLOduration=4.375505756 podStartE2EDuration="4.375505756s" podCreationTimestamp="2025-12-16 03:52:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:52:05.374339811 +0000 UTC m=+1.388966855" 
watchObservedRunningTime="2025-12-16 03:52:05.375505756 +0000 UTC m=+1.390132777" Dec 16 03:52:05.406821 kubelet[2985]: I1216 03:52:05.406619 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-n64tt.gb1.brightbox.com" podStartSLOduration=1.406600589 podStartE2EDuration="1.406600589s" podCreationTimestamp="2025-12-16 03:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:52:05.390852801 +0000 UTC m=+1.405479836" watchObservedRunningTime="2025-12-16 03:52:05.406600589 +0000 UTC m=+1.421227617" Dec 16 03:52:09.997546 kubelet[2985]: I1216 03:52:09.997367 2985 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 03:52:09.999330 kubelet[2985]: I1216 03:52:09.998920 2985 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 03:52:09.999516 containerd[1640]: time="2025-12-16T03:52:09.998216404Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 03:52:11.261230 systemd[1]: Created slice kubepods-besteffort-pod66ddcc37_6340_48a7_af55_b64534788d29.slice - libcontainer container kubepods-besteffort-pod66ddcc37_6340_48a7_af55_b64534788d29.slice. Dec 16 03:52:11.289008 systemd[1]: Created slice kubepods-besteffort-poddc9166da_69d7_4645_a887_c693447f170b.slice - libcontainer container kubepods-besteffort-poddc9166da_69d7_4645_a887_c693447f170b.slice. 
Dec 16 03:52:11.327070 kubelet[2985]: I1216 03:52:11.327016 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66ddcc37-6340-48a7-af55-b64534788d29-kube-proxy\") pod \"kube-proxy-mnm77\" (UID: \"66ddcc37-6340-48a7-af55-b64534788d29\") " pod="kube-system/kube-proxy-mnm77" Dec 16 03:52:11.327964 kubelet[2985]: I1216 03:52:11.327740 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dc9166da-69d7-4645-a887-c693447f170b-var-lib-calico\") pod \"tigera-operator-7dcd859c48-t7hn8\" (UID: \"dc9166da-69d7-4645-a887-c693447f170b\") " pod="tigera-operator/tigera-operator-7dcd859c48-t7hn8" Dec 16 03:52:11.327964 kubelet[2985]: I1216 03:52:11.327788 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66ddcc37-6340-48a7-af55-b64534788d29-xtables-lock\") pod \"kube-proxy-mnm77\" (UID: \"66ddcc37-6340-48a7-af55-b64534788d29\") " pod="kube-system/kube-proxy-mnm77" Dec 16 03:52:11.327964 kubelet[2985]: I1216 03:52:11.327828 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66ddcc37-6340-48a7-af55-b64534788d29-lib-modules\") pod \"kube-proxy-mnm77\" (UID: \"66ddcc37-6340-48a7-af55-b64534788d29\") " pod="kube-system/kube-proxy-mnm77" Dec 16 03:52:11.327964 kubelet[2985]: I1216 03:52:11.327861 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68nhm\" (UniqueName: \"kubernetes.io/projected/66ddcc37-6340-48a7-af55-b64534788d29-kube-api-access-68nhm\") pod \"kube-proxy-mnm77\" (UID: \"66ddcc37-6340-48a7-af55-b64534788d29\") " pod="kube-system/kube-proxy-mnm77" Dec 16 03:52:11.327964 kubelet[2985]: I1216 
03:52:11.327890 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7sdl\" (UniqueName: \"kubernetes.io/projected/dc9166da-69d7-4645-a887-c693447f170b-kube-api-access-b7sdl\") pod \"tigera-operator-7dcd859c48-t7hn8\" (UID: \"dc9166da-69d7-4645-a887-c693447f170b\") " pod="tigera-operator/tigera-operator-7dcd859c48-t7hn8" Dec 16 03:52:11.587254 containerd[1640]: time="2025-12-16T03:52:11.587197677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mnm77,Uid:66ddcc37-6340-48a7-af55-b64534788d29,Namespace:kube-system,Attempt:0,}" Dec 16 03:52:11.598212 containerd[1640]: time="2025-12-16T03:52:11.598091440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-t7hn8,Uid:dc9166da-69d7-4645-a887-c693447f170b,Namespace:tigera-operator,Attempt:0,}" Dec 16 03:52:11.630276 containerd[1640]: time="2025-12-16T03:52:11.630021168Z" level=info msg="connecting to shim 793f61bfdb2166076a168528ebe2ed4f99504b405189a92f1d388ba5bcc5b311" address="unix:///run/containerd/s/db6f9196e37be857df44069c54f9d7aef73550c529496611596894a088f30c0b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:52:11.640186 containerd[1640]: time="2025-12-16T03:52:11.640128674Z" level=info msg="connecting to shim 81845f533ad77320141073bdd6586acc2a409059b6de845cab947c02fb3c7f54" address="unix:///run/containerd/s/30ddd387b0df9638714496e5e196e946a0d49a7b72228ace67efc34d5a89a050" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:52:11.707039 systemd[1]: Started cri-containerd-793f61bfdb2166076a168528ebe2ed4f99504b405189a92f1d388ba5bcc5b311.scope - libcontainer container 793f61bfdb2166076a168528ebe2ed4f99504b405189a92f1d388ba5bcc5b311. Dec 16 03:52:11.719940 systemd[1]: Started cri-containerd-81845f533ad77320141073bdd6586acc2a409059b6de845cab947c02fb3c7f54.scope - libcontainer container 81845f533ad77320141073bdd6586acc2a409059b6de845cab947c02fb3c7f54. 
Dec 16 03:52:11.733000 audit: BPF prog-id=137 op=LOAD Dec 16 03:52:11.740691 kernel: kauditd_printk_skb: 34 callbacks suppressed Dec 16 03:52:11.741099 kernel: audit: type=1334 audit(1765857131.733:441): prog-id=137 op=LOAD Dec 16 03:52:11.734000 audit: BPF prog-id=138 op=LOAD Dec 16 03:52:11.734000 audit[3064]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3045 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.748359 kernel: audit: type=1334 audit(1765857131.734:442): prog-id=138 op=LOAD Dec 16 03:52:11.748442 kernel: audit: type=1300 audit(1765857131.734:442): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3045 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.754747 kernel: audit: type=1327 audit(1765857131.734:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739336636316266646232313636303736613136383532386562653265 Dec 16 03:52:11.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739336636316266646232313636303736613136383532386562653265 Dec 16 03:52:11.734000 audit: BPF prog-id=138 op=UNLOAD Dec 16 03:52:11.734000 audit[3064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3045 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.767359 kernel: audit: type=1334 audit(1765857131.734:443): prog-id=138 op=UNLOAD Dec 16 03:52:11.767447 kernel: audit: type=1300 audit(1765857131.734:443): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3045 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739336636316266646232313636303736613136383532386562653265 Dec 16 03:52:11.779762 kernel: audit: type=1327 audit(1765857131.734:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739336636316266646232313636303736613136383532386562653265 Dec 16 03:52:11.734000 audit: BPF prog-id=139 op=LOAD Dec 16 03:52:11.784753 kernel: audit: type=1334 audit(1765857131.734:444): prog-id=139 op=LOAD Dec 16 03:52:11.784820 kernel: audit: type=1300 audit(1765857131.734:444): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3045 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.734000 audit[3064]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3045 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:52:11.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739336636316266646232313636303736613136383532386562653265 Dec 16 03:52:11.792746 kernel: audit: type=1327 audit(1765857131.734:444): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739336636316266646232313636303736613136383532386562653265 Dec 16 03:52:11.734000 audit: BPF prog-id=140 op=LOAD Dec 16 03:52:11.734000 audit[3064]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3045 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739336636316266646232313636303736613136383532386562653265 Dec 16 03:52:11.734000 audit: BPF prog-id=140 op=UNLOAD Dec 16 03:52:11.734000 audit[3064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3045 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739336636316266646232313636303736613136383532386562653265 
Dec 16 03:52:11.734000 audit: BPF prog-id=139 op=UNLOAD Dec 16 03:52:11.734000 audit[3064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3045 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739336636316266646232313636303736613136383532386562653265 Dec 16 03:52:11.734000 audit: BPF prog-id=141 op=LOAD Dec 16 03:52:11.734000 audit[3064]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3045 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739336636316266646232313636303736613136383532386562653265 Dec 16 03:52:11.759000 audit: BPF prog-id=142 op=LOAD Dec 16 03:52:11.760000 audit: BPF prog-id=143 op=LOAD Dec 16 03:52:11.760000 audit[3080]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3060 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.760000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831383435663533336164373733323031343130373362646436353836 Dec 16 03:52:11.760000 audit: BPF prog-id=143 op=UNLOAD Dec 16 03:52:11.760000 audit[3080]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3060 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831383435663533336164373733323031343130373362646436353836 Dec 16 03:52:11.760000 audit: BPF prog-id=144 op=LOAD Dec 16 03:52:11.760000 audit[3080]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3060 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831383435663533336164373733323031343130373362646436353836 Dec 16 03:52:11.760000 audit: BPF prog-id=145 op=LOAD Dec 16 03:52:11.760000 audit[3080]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3060 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:52:11.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831383435663533336164373733323031343130373362646436353836 Dec 16 03:52:11.760000 audit: BPF prog-id=145 op=UNLOAD Dec 16 03:52:11.760000 audit[3080]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3060 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831383435663533336164373733323031343130373362646436353836 Dec 16 03:52:11.760000 audit: BPF prog-id=144 op=UNLOAD Dec 16 03:52:11.760000 audit[3080]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3060 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831383435663533336164373733323031343130373362646436353836 Dec 16 03:52:11.760000 audit: BPF prog-id=146 op=LOAD Dec 16 03:52:11.760000 audit[3080]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3060 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831383435663533336164373733323031343130373362646436353836 Dec 16 03:52:11.810204 containerd[1640]: time="2025-12-16T03:52:11.810151959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mnm77,Uid:66ddcc37-6340-48a7-af55-b64534788d29,Namespace:kube-system,Attempt:0,} returns sandbox id \"793f61bfdb2166076a168528ebe2ed4f99504b405189a92f1d388ba5bcc5b311\"" Dec 16 03:52:11.817340 containerd[1640]: time="2025-12-16T03:52:11.817277244Z" level=info msg="CreateContainer within sandbox \"793f61bfdb2166076a168528ebe2ed4f99504b405189a92f1d388ba5bcc5b311\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 03:52:11.832261 containerd[1640]: time="2025-12-16T03:52:11.832226901Z" level=info msg="Container cc8bc5edc6723f1349da720d2142f7201cf6329d79936d4c7242a4ae25e5515a: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:52:11.844833 containerd[1640]: time="2025-12-16T03:52:11.844327263Z" level=info msg="CreateContainer within sandbox \"793f61bfdb2166076a168528ebe2ed4f99504b405189a92f1d388ba5bcc5b311\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cc8bc5edc6723f1349da720d2142f7201cf6329d79936d4c7242a4ae25e5515a\"" Dec 16 03:52:11.846804 containerd[1640]: time="2025-12-16T03:52:11.845443059Z" level=info msg="StartContainer for \"cc8bc5edc6723f1349da720d2142f7201cf6329d79936d4c7242a4ae25e5515a\"" Dec 16 03:52:11.848080 containerd[1640]: time="2025-12-16T03:52:11.847325467Z" level=info msg="connecting to shim cc8bc5edc6723f1349da720d2142f7201cf6329d79936d4c7242a4ae25e5515a" address="unix:///run/containerd/s/db6f9196e37be857df44069c54f9d7aef73550c529496611596894a088f30c0b" protocol=ttrpc version=3 Dec 16 03:52:11.866067 
containerd[1640]: time="2025-12-16T03:52:11.866001554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-t7hn8,Uid:dc9166da-69d7-4645-a887-c693447f170b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"81845f533ad77320141073bdd6586acc2a409059b6de845cab947c02fb3c7f54\"" Dec 16 03:52:11.870644 containerd[1640]: time="2025-12-16T03:52:11.870497931Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 03:52:11.886231 systemd[1]: Started cri-containerd-cc8bc5edc6723f1349da720d2142f7201cf6329d79936d4c7242a4ae25e5515a.scope - libcontainer container cc8bc5edc6723f1349da720d2142f7201cf6329d79936d4c7242a4ae25e5515a. Dec 16 03:52:11.968000 audit: BPF prog-id=147 op=LOAD Dec 16 03:52:11.968000 audit[3120]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3045 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363386263356564633637323366313334396461373230643231343266 Dec 16 03:52:11.969000 audit: BPF prog-id=148 op=LOAD Dec 16 03:52:11.969000 audit[3120]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3045 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.969000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363386263356564633637323366313334396461373230643231343266 Dec 16 03:52:11.969000 audit: BPF prog-id=148 op=UNLOAD Dec 16 03:52:11.969000 audit[3120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3045 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363386263356564633637323366313334396461373230643231343266 Dec 16 03:52:11.969000 audit: BPF prog-id=147 op=UNLOAD Dec 16 03:52:11.969000 audit[3120]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3045 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:11.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363386263356564633637323366313334396461373230643231343266 Dec 16 03:52:11.969000 audit: BPF prog-id=149 op=LOAD Dec 16 03:52:11.969000 audit[3120]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3045 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:52:11.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363386263356564633637323366313334396461373230643231343266 Dec 16 03:52:12.000002 containerd[1640]: time="2025-12-16T03:52:11.999920050Z" level=info msg="StartContainer for \"cc8bc5edc6723f1349da720d2142f7201cf6329d79936d4c7242a4ae25e5515a\" returns successfully" Dec 16 03:52:12.318484 kubelet[2985]: I1216 03:52:12.318359 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mnm77" podStartSLOduration=1.318305751 podStartE2EDuration="1.318305751s" podCreationTimestamp="2025-12-16 03:52:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:52:12.318130435 +0000 UTC m=+8.332757473" watchObservedRunningTime="2025-12-16 03:52:12.318305751 +0000 UTC m=+8.332932768" Dec 16 03:52:12.541000 audit[3190]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.541000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcbf833a80 a2=0 a3=7ffcbf833a6c items=0 ppid=3138 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 03:52:12.543000 audit[3191]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3191 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.543000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffd1237d630 a2=0 a3=7ffd1237d61c items=0 ppid=3138 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.543000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 03:52:12.544000 audit[3192]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3192 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.544000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffa19b680 a2=0 a3=7ffffa19b66c items=0 ppid=3138 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.544000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 03:52:12.546000 audit[3194]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3194 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.546000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc95a182d0 a2=0 a3=7ffc95a182bc items=0 ppid=3138 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.546000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 03:52:12.547000 audit[3195]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.547000 audit[3195]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=104 a0=3 a1=7ffc08961f60 a2=0 a3=7ffc08961f4c items=0 ppid=3138 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.547000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 03:52:12.549000 audit[3196]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3196 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.549000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff60d54330 a2=0 a3=7fff60d5431c items=0 ppid=3138 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.549000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 03:52:12.652000 audit[3199]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3199 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.652000 audit[3199]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe6a123450 a2=0 a3=7ffe6a12343c items=0 ppid=3138 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.652000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 03:52:12.658000 audit[3201]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 
03:52:12.658000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff81f889b0 a2=0 a3=7fff81f8899c items=0 ppid=3138 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.658000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 03:52:12.665000 audit[3204]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.665000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc38987310 a2=0 a3=7ffc389872fc items=0 ppid=3138 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.665000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 03:52:12.667000 audit[3205]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.667000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd05e1a940 a2=0 a3=7ffd05e1a92c items=0 ppid=3138 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 03:52:12.667000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 03:52:12.671000 audit[3207]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.671000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffe4e481f0 a2=0 a3=7fffe4e481dc items=0 ppid=3138 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.671000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 03:52:12.672000 audit[3208]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3208 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.672000 audit[3208]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8d288910 a2=0 a3=7ffc8d2888fc items=0 ppid=3138 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.672000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 03:52:12.678000 audit[3210]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.678000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc64a283e0 a2=0 a3=7ffc64a283cc items=0 ppid=3138 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.678000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 03:52:12.684000 audit[3213]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.684000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd8d86ed40 a2=0 a3=7ffd8d86ed2c items=0 ppid=3138 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 03:52:12.686000 audit[3214]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.686000 audit[3214]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce5e1f120 a2=0 a3=7ffce5e1f10c items=0 ppid=3138 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.686000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 03:52:12.690000 audit[3216]: 
NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.690000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc29be7020 a2=0 a3=7ffc29be700c items=0 ppid=3138 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.690000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 03:52:12.692000 audit[3217]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3217 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.692000 audit[3217]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffdce84c30 a2=0 a3=7fffdce84c1c items=0 ppid=3138 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.692000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 03:52:12.698000 audit[3219]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.698000 audit[3219]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffec165b3b0 a2=0 a3=7ffec165b39c items=0 ppid=3138 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.698000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:52:12.703000 audit[3222]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.703000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffffa5fd740 a2=0 a3=7ffffa5fd72c items=0 ppid=3138 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.703000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:52:12.709000 audit[3225]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.709000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffef53ac8b0 a2=0 a3=7ffef53ac89c items=0 ppid=3138 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.709000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 03:52:12.711000 audit[3226]: NETFILTER_CFG table=nat:74 family=2 entries=1 
op=nft_register_chain pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.711000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff589576b0 a2=0 a3=7fff5895769c items=0 ppid=3138 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.711000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 03:52:12.716000 audit[3228]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.716000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe7b3b4220 a2=0 a3=7ffe7b3b420c items=0 ppid=3138 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.716000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:52:12.721000 audit[3231]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.721000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffad362bb0 a2=0 a3=7fffad362b9c items=0 ppid=3138 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.721000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:52:12.723000 audit[3232]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.723000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc977c4be0 a2=0 a3=7ffc977c4bcc items=0 ppid=3138 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.723000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 03:52:12.728000 audit[3234]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:52:12.728000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe4198bd70 a2=0 a3=7ffe4198bd5c items=0 ppid=3138 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.728000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:52:12.775000 audit[3240]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:12.775000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff8f641e90 a2=0 a3=7fff8f641e7c 
items=0 ppid=3138 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.775000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:12.784000 audit[3240]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:12.784000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff8f641e90 a2=0 a3=7fff8f641e7c items=0 ppid=3138 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.784000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:12.786000 audit[3246]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3246 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.786000 audit[3246]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffca1a5dd00 a2=0 a3=7ffca1a5dcec items=0 ppid=3138 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.786000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 03:52:12.790000 audit[3248]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3248 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.790000 audit[3248]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=836 a0=3 a1=7ffdf5bcb250 a2=0 a3=7ffdf5bcb23c items=0 ppid=3138 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 03:52:12.797000 audit[3251]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.797000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeb2690190 a2=0 a3=7ffeb269017c items=0 ppid=3138 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.797000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 03:52:12.801000 audit[3252]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3252 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.801000 audit[3252]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef1ed9d50 a2=0 a3=7ffef1ed9d3c items=0 ppid=3138 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.801000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 03:52:12.804000 audit[3254]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3254 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.804000 audit[3254]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff312a50d0 a2=0 a3=7fff312a50bc items=0 ppid=3138 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 03:52:12.807000 audit[3255]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.807000 audit[3255]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6ae74a60 a2=0 a3=7ffc6ae74a4c items=0 ppid=3138 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.807000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 03:52:12.811000 audit[3257]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3257 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.811000 audit[3257]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc60ee6d40 a2=0 a3=7ffc60ee6d2c items=0 ppid=3138 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.811000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 03:52:12.818000 audit[3260]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.818000 audit[3260]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc1d6df2f0 a2=0 a3=7ffc1d6df2dc items=0 ppid=3138 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.818000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 03:52:12.820000 audit[3261]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.820000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff452c33e0 a2=0 a3=7fff452c33cc items=0 ppid=3138 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.820000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 03:52:12.824000 audit[3263]: NETFILTER_CFG 
table=filter:90 family=10 entries=1 op=nft_register_rule pid=3263 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.824000 audit[3263]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe77319d80 a2=0 a3=7ffe77319d6c items=0 ppid=3138 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.824000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 03:52:12.826000 audit[3264]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3264 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.826000 audit[3264]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd98d5bc40 a2=0 a3=7ffd98d5bc2c items=0 ppid=3138 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.826000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 03:52:12.831000 audit[3266]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3266 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.831000 audit[3266]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd66bce210 a2=0 a3=7ffd66bce1fc items=0 ppid=3138 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.831000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:52:12.837000 audit[3269]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.837000 audit[3269]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe53156c30 a2=0 a3=7ffe53156c1c items=0 ppid=3138 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.837000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 03:52:12.844000 audit[3272]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.844000 audit[3272]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeda0d2f60 a2=0 a3=7ffeda0d2f4c items=0 ppid=3138 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.844000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 03:52:12.847000 audit[3273]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3273 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.847000 audit[3273]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff08e40df0 a2=0 a3=7fff08e40ddc items=0 ppid=3138 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.847000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 03:52:12.851000 audit[3275]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.851000 audit[3275]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffdf08f3f0 a2=0 a3=7fffdf08f3dc items=0 ppid=3138 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.851000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:52:12.858000 audit[3278]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.858000 audit[3278]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc1a5a65e0 a2=0 a3=7ffc1a5a65cc items=0 ppid=3138 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.858000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:52:12.860000 audit[3279]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.860000 audit[3279]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5ab3e3c0 a2=0 a3=7ffe5ab3e3ac items=0 ppid=3138 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.860000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 03:52:12.864000 audit[3281]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3281 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.864000 audit[3281]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd18b72c70 a2=0 a3=7ffd18b72c5c items=0 ppid=3138 pid=3281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.864000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:52:12.865000 audit[3282]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3282 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.865000 audit[3282]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa1adc0f0 a2=0 
a3=7fffa1adc0dc items=0 ppid=3138 pid=3282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:52:12.870000 audit[3284]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.870000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd8d82fff0 a2=0 a3=7ffd8d82ffdc items=0 ppid=3138 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.870000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:52:12.876000 audit[3287]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:52:12.876000 audit[3287]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc6457a330 a2=0 a3=7ffc6457a31c items=0 ppid=3138 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:52:12.886000 audit[3289]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3289 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:52:12.886000 audit[3289]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc4dca01e0 a2=0 a3=7ffc4dca01cc items=0 ppid=3138 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.886000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:12.887000 audit[3289]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3289 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:52:12.887000 audit[3289]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc4dca01e0 a2=0 a3=7ffc4dca01cc items=0 ppid=3138 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:12.887000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:14.040305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3393144942.mount: Deactivated successfully. 
Dec 16 03:52:15.479767 containerd[1640]: time="2025-12-16T03:52:15.478684865Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:52:15.479767 containerd[1640]: time="2025-12-16T03:52:15.479712371Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 16 03:52:15.480742 containerd[1640]: time="2025-12-16T03:52:15.480674941Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:52:15.492580 containerd[1640]: time="2025-12-16T03:52:15.492508803Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:52:15.494284 containerd[1640]: time="2025-12-16T03:52:15.493655793Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.623098846s" Dec 16 03:52:15.494284 containerd[1640]: time="2025-12-16T03:52:15.493704693Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 03:52:15.500235 containerd[1640]: time="2025-12-16T03:52:15.500161170Z" level=info msg="CreateContainer within sandbox \"81845f533ad77320141073bdd6586acc2a409059b6de845cab947c02fb3c7f54\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 03:52:15.520867 containerd[1640]: time="2025-12-16T03:52:15.520197827Z" level=info msg="Container 
ec29133c16a35ce3c1bfbf484aee30e5ff87c273de9b7078647c2257f3dccd7f: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:52:15.524085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3515176299.mount: Deactivated successfully. Dec 16 03:52:15.531140 containerd[1640]: time="2025-12-16T03:52:15.531069250Z" level=info msg="CreateContainer within sandbox \"81845f533ad77320141073bdd6586acc2a409059b6de845cab947c02fb3c7f54\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ec29133c16a35ce3c1bfbf484aee30e5ff87c273de9b7078647c2257f3dccd7f\"" Dec 16 03:52:15.532623 containerd[1640]: time="2025-12-16T03:52:15.532564137Z" level=info msg="StartContainer for \"ec29133c16a35ce3c1bfbf484aee30e5ff87c273de9b7078647c2257f3dccd7f\"" Dec 16 03:52:15.536265 containerd[1640]: time="2025-12-16T03:52:15.534862720Z" level=info msg="connecting to shim ec29133c16a35ce3c1bfbf484aee30e5ff87c273de9b7078647c2257f3dccd7f" address="unix:///run/containerd/s/30ddd387b0df9638714496e5e196e946a0d49a7b72228ace67efc34d5a89a050" protocol=ttrpc version=3 Dec 16 03:52:15.569409 systemd[1]: Started cri-containerd-ec29133c16a35ce3c1bfbf484aee30e5ff87c273de9b7078647c2257f3dccd7f.scope - libcontainer container ec29133c16a35ce3c1bfbf484aee30e5ff87c273de9b7078647c2257f3dccd7f. 
Dec 16 03:52:15.594000 audit: BPF prog-id=150 op=LOAD Dec 16 03:52:15.595000 audit: BPF prog-id=151 op=LOAD Dec 16 03:52:15.595000 audit[3299]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3060 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:15.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563323931333363313661333563653363316266626634383461656533 Dec 16 03:52:15.595000 audit: BPF prog-id=151 op=UNLOAD Dec 16 03:52:15.595000 audit[3299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3060 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:15.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563323931333363313661333563653363316266626634383461656533 Dec 16 03:52:15.596000 audit: BPF prog-id=152 op=LOAD Dec 16 03:52:15.596000 audit[3299]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3060 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:15.596000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563323931333363313661333563653363316266626634383461656533 Dec 16 03:52:15.596000 audit: BPF prog-id=153 op=LOAD Dec 16 03:52:15.596000 audit[3299]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3060 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:15.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563323931333363313661333563653363316266626634383461656533 Dec 16 03:52:15.596000 audit: BPF prog-id=153 op=UNLOAD Dec 16 03:52:15.596000 audit[3299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3060 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:15.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563323931333363313661333563653363316266626634383461656533 Dec 16 03:52:15.596000 audit: BPF prog-id=152 op=UNLOAD Dec 16 03:52:15.596000 audit[3299]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3060 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:52:15.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563323931333363313661333563653363316266626634383461656533 Dec 16 03:52:15.596000 audit: BPF prog-id=154 op=LOAD Dec 16 03:52:15.596000 audit[3299]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3060 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:15.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563323931333363313661333563653363316266626634383461656533 Dec 16 03:52:15.629018 containerd[1640]: time="2025-12-16T03:52:15.628949023Z" level=info msg="StartContainer for \"ec29133c16a35ce3c1bfbf484aee30e5ff87c273de9b7078647c2257f3dccd7f\" returns successfully" Dec 16 03:52:16.332368 kubelet[2985]: I1216 03:52:16.332290 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-t7hn8" podStartSLOduration=1.706882209 podStartE2EDuration="5.332256622s" podCreationTimestamp="2025-12-16 03:52:11 +0000 UTC" firstStartedPulling="2025-12-16 03:52:11.869540369 +0000 UTC m=+7.884167380" lastFinishedPulling="2025-12-16 03:52:15.494914772 +0000 UTC m=+11.509541793" observedRunningTime="2025-12-16 03:52:16.331988813 +0000 UTC m=+12.346615849" watchObservedRunningTime="2025-12-16 03:52:16.332256622 +0000 UTC m=+12.346883649" Dec 16 03:52:23.343077 sudo[1949]: pam_unix(sudo:session): session closed for user root Dec 16 03:52:23.353258 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 03:52:23.353383 
kernel: audit: type=1106 audit(1765857143.342:521): pid=1949 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:52:23.342000 audit[1949]: USER_END pid=1949 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:52:23.342000 audit[1949]: CRED_DISP pid=1949 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:52:23.367100 kernel: audit: type=1104 audit(1765857143.342:522): pid=1949 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 03:52:23.505920 sshd[1948]: Connection closed by 139.178.89.65 port 59584 Dec 16 03:52:23.507101 sshd-session[1944]: pam_unix(sshd:session): session closed for user core Dec 16 03:52:23.509000 audit[1944]: USER_END pid=1944 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:52:23.519755 kernel: audit: type=1106 audit(1765857143.509:523): pid=1944 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:52:23.522108 systemd-logind[1614]: Session 12 logged out. Waiting for processes to exit. Dec 16 03:52:23.522626 systemd[1]: sshd@8-10.230.36.234:22-139.178.89.65:59584.service: Deactivated successfully. Dec 16 03:52:23.509000 audit[1944]: CRED_DISP pid=1944 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:52:23.531818 kernel: audit: type=1104 audit(1765857143.509:524): pid=1944 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:52:23.533564 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 03:52:23.534213 systemd[1]: session-12.scope: Consumed 7.439s CPU time, 155.8M memory peak. 
Dec 16 03:52:23.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.36.234:22-139.178.89.65:59584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:52:23.550910 kernel: audit: type=1131 audit(1765857143.523:525): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.36.234:22-139.178.89.65:59584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:52:23.556067 systemd-logind[1614]: Removed session 12. Dec 16 03:52:24.180000 audit[3381]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:24.188767 kernel: audit: type=1325 audit(1765857144.180:526): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:24.180000 audit[3381]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcb3e512b0 a2=0 a3=7ffcb3e5129c items=0 ppid=3138 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:24.197760 kernel: audit: type=1300 audit(1765857144.180:526): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcb3e512b0 a2=0 a3=7ffcb3e5129c items=0 ppid=3138 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:24.180000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:24.202988 kernel: audit: type=1327 audit(1765857144.180:526): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:24.197000 audit[3381]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:24.207768 kernel: audit: type=1325 audit(1765857144.197:527): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:24.197000 audit[3381]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcb3e512b0 a2=0 a3=0 items=0 ppid=3138 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:24.215759 kernel: audit: type=1300 audit(1765857144.197:527): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcb3e512b0 a2=0 a3=0 items=0 ppid=3138 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:24.197000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:24.229000 audit[3383]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:24.229000 audit[3383]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd642e6ed0 a2=0 a3=7ffd642e6ebc items=0 ppid=3138 pid=3383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:24.229000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:24.237000 audit[3383]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3383 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:24.237000 audit[3383]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd642e6ed0 a2=0 a3=0 items=0 ppid=3138 pid=3383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:24.237000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:27.614000 audit[3386]: NETFILTER_CFG table=filter:109 family=2 entries=16 op=nft_register_rule pid=3386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:27.614000 audit[3386]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdd8218140 a2=0 a3=7ffdd821812c items=0 ppid=3138 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:27.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:27.619000 audit[3386]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:27.619000 audit[3386]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd8218140 a2=0 a3=0 items=0 ppid=3138 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:52:27.619000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:27.683000 audit[3388]: NETFILTER_CFG table=filter:111 family=2 entries=17 op=nft_register_rule pid=3388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:27.683000 audit[3388]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd08ce0ea0 a2=0 a3=7ffd08ce0e8c items=0 ppid=3138 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:27.683000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:27.693000 audit[3388]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:27.693000 audit[3388]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd08ce0ea0 a2=0 a3=0 items=0 ppid=3138 pid=3388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:27.693000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:28.721435 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 16 03:52:28.721687 kernel: audit: type=1325 audit(1765857148.711:534): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3390 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:28.711000 audit[3390]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3390 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 
03:52:28.711000 audit[3390]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffbb830290 a2=0 a3=7fffbb83027c items=0 ppid=3138 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:28.729017 kernel: audit: type=1300 audit(1765857148.711:534): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffbb830290 a2=0 a3=7fffbb83027c items=0 ppid=3138 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:28.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:28.736648 kernel: audit: type=1327 audit(1765857148.711:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:28.736744 kernel: audit: type=1325 audit(1765857148.727:535): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3390 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:28.727000 audit[3390]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3390 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:28.727000 audit[3390]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffbb830290 a2=0 a3=0 items=0 ppid=3138 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:28.746650 kernel: audit: type=1300 audit(1765857148.727:535): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffbb830290 a2=0 a3=0 items=0 ppid=3138 pid=3390 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:28.746791 kernel: audit: type=1327 audit(1765857148.727:535): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:28.727000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:30.062524 systemd[1]: Created slice kubepods-besteffort-pod2f05687e_b6f5_4315_89f2_89dc20f0e8f2.slice - libcontainer container kubepods-besteffort-pod2f05687e_b6f5_4315_89f2_89dc20f0e8f2.slice. Dec 16 03:52:30.065994 kubelet[2985]: I1216 03:52:30.065692 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f05687e-b6f5-4315-89f2-89dc20f0e8f2-tigera-ca-bundle\") pod \"calico-typha-c7c55c847-pkz2f\" (UID: \"2f05687e-b6f5-4315-89f2-89dc20f0e8f2\") " pod="calico-system/calico-typha-c7c55c847-pkz2f" Dec 16 03:52:30.067173 kubelet[2985]: I1216 03:52:30.066506 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2f05687e-b6f5-4315-89f2-89dc20f0e8f2-typha-certs\") pod \"calico-typha-c7c55c847-pkz2f\" (UID: \"2f05687e-b6f5-4315-89f2-89dc20f0e8f2\") " pod="calico-system/calico-typha-c7c55c847-pkz2f" Dec 16 03:52:30.067173 kubelet[2985]: I1216 03:52:30.066546 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m22ls\" (UniqueName: \"kubernetes.io/projected/2f05687e-b6f5-4315-89f2-89dc20f0e8f2-kube-api-access-m22ls\") pod \"calico-typha-c7c55c847-pkz2f\" (UID: \"2f05687e-b6f5-4315-89f2-89dc20f0e8f2\") " pod="calico-system/calico-typha-c7c55c847-pkz2f" Dec 16 
03:52:30.092000 audit[3392]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3392 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:30.098838 kernel: audit: type=1325 audit(1765857150.092:536): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3392 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:30.092000 audit[3392]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff02e16450 a2=0 a3=7fff02e1643c items=0 ppid=3138 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:30.106759 kernel: audit: type=1300 audit(1765857150.092:536): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff02e16450 a2=0 a3=7fff02e1643c items=0 ppid=3138 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:30.092000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:30.111776 kernel: audit: type=1327 audit(1765857150.092:536): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:30.112000 audit[3392]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3392 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:30.116793 kernel: audit: type=1325 audit(1765857150.112:537): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3392 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:30.112000 audit[3392]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff02e16450 a2=0 a3=0 items=0 ppid=3138 pid=3392 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:30.112000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:30.384925 containerd[1640]: time="2025-12-16T03:52:30.383912314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c7c55c847-pkz2f,Uid:2f05687e-b6f5-4315-89f2-89dc20f0e8f2,Namespace:calico-system,Attempt:0,}" Dec 16 03:52:30.440862 systemd[1]: Created slice kubepods-besteffort-poda661d69c_5047_46b0_9863_7868df01e883.slice - libcontainer container kubepods-besteffort-poda661d69c_5047_46b0_9863_7868df01e883.slice. Dec 16 03:52:30.454173 containerd[1640]: time="2025-12-16T03:52:30.453891937Z" level=info msg="connecting to shim d53d6757473bfa4c47729cabbf7375f15f36491b2ae14446bbf5ae4952657ec7" address="unix:///run/containerd/s/4a24025e39f6d2fee334c20804b941f9841ba1a45d18a9cc76d72832a40b828a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:52:30.469837 kubelet[2985]: I1216 03:52:30.469781 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a661d69c-5047-46b0-9863-7868df01e883-cni-log-dir\") pod \"calico-node-sdkdt\" (UID: \"a661d69c-5047-46b0-9863-7868df01e883\") " pod="calico-system/calico-node-sdkdt" Dec 16 03:52:30.469948 kubelet[2985]: I1216 03:52:30.469883 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a661d69c-5047-46b0-9863-7868df01e883-policysync\") pod \"calico-node-sdkdt\" (UID: \"a661d69c-5047-46b0-9863-7868df01e883\") " pod="calico-system/calico-node-sdkdt" Dec 16 03:52:30.470029 kubelet[2985]: I1216 03:52:30.469942 2985 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5qcr\" (UniqueName: \"kubernetes.io/projected/a661d69c-5047-46b0-9863-7868df01e883-kube-api-access-n5qcr\") pod \"calico-node-sdkdt\" (UID: \"a661d69c-5047-46b0-9863-7868df01e883\") " pod="calico-system/calico-node-sdkdt" Dec 16 03:52:30.470029 kubelet[2985]: I1216 03:52:30.470012 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a661d69c-5047-46b0-9863-7868df01e883-lib-modules\") pod \"calico-node-sdkdt\" (UID: \"a661d69c-5047-46b0-9863-7868df01e883\") " pod="calico-system/calico-node-sdkdt" Dec 16 03:52:30.470145 kubelet[2985]: I1216 03:52:30.470045 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a661d69c-5047-46b0-9863-7868df01e883-node-certs\") pod \"calico-node-sdkdt\" (UID: \"a661d69c-5047-46b0-9863-7868df01e883\") " pod="calico-system/calico-node-sdkdt" Dec 16 03:52:30.470145 kubelet[2985]: I1216 03:52:30.470094 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a661d69c-5047-46b0-9863-7868df01e883-tigera-ca-bundle\") pod \"calico-node-sdkdt\" (UID: \"a661d69c-5047-46b0-9863-7868df01e883\") " pod="calico-system/calico-node-sdkdt" Dec 16 03:52:30.470233 kubelet[2985]: I1216 03:52:30.470131 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a661d69c-5047-46b0-9863-7868df01e883-var-run-calico\") pod \"calico-node-sdkdt\" (UID: \"a661d69c-5047-46b0-9863-7868df01e883\") " pod="calico-system/calico-node-sdkdt" Dec 16 03:52:30.470233 kubelet[2985]: I1216 03:52:30.470222 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a661d69c-5047-46b0-9863-7868df01e883-xtables-lock\") pod \"calico-node-sdkdt\" (UID: \"a661d69c-5047-46b0-9863-7868df01e883\") " pod="calico-system/calico-node-sdkdt" Dec 16 03:52:30.471883 kubelet[2985]: I1216 03:52:30.470382 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a661d69c-5047-46b0-9863-7868df01e883-cni-net-dir\") pod \"calico-node-sdkdt\" (UID: \"a661d69c-5047-46b0-9863-7868df01e883\") " pod="calico-system/calico-node-sdkdt" Dec 16 03:52:30.471883 kubelet[2985]: I1216 03:52:30.470418 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a661d69c-5047-46b0-9863-7868df01e883-var-lib-calico\") pod \"calico-node-sdkdt\" (UID: \"a661d69c-5047-46b0-9863-7868df01e883\") " pod="calico-system/calico-node-sdkdt" Dec 16 03:52:30.471883 kubelet[2985]: I1216 03:52:30.470561 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a661d69c-5047-46b0-9863-7868df01e883-cni-bin-dir\") pod \"calico-node-sdkdt\" (UID: \"a661d69c-5047-46b0-9863-7868df01e883\") " pod="calico-system/calico-node-sdkdt" Dec 16 03:52:30.471883 kubelet[2985]: I1216 03:52:30.470855 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a661d69c-5047-46b0-9863-7868df01e883-flexvol-driver-host\") pod \"calico-node-sdkdt\" (UID: \"a661d69c-5047-46b0-9863-7868df01e883\") " pod="calico-system/calico-node-sdkdt" Dec 16 03:52:30.534002 systemd[1]: Started cri-containerd-d53d6757473bfa4c47729cabbf7375f15f36491b2ae14446bbf5ae4952657ec7.scope - libcontainer container d53d6757473bfa4c47729cabbf7375f15f36491b2ae14446bbf5ae4952657ec7. 
Dec 16 03:52:30.569000 audit: BPF prog-id=155 op=LOAD Dec 16 03:52:30.569000 audit: BPF prog-id=156 op=LOAD Dec 16 03:52:30.569000 audit[3416]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3402 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:30.569000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435336436373537343733626661346334373732396361626266373337 Dec 16 03:52:30.570000 audit: BPF prog-id=156 op=UNLOAD Dec 16 03:52:30.570000 audit[3416]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3402 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:30.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435336436373537343733626661346334373732396361626266373337 Dec 16 03:52:30.570000 audit: BPF prog-id=157 op=LOAD Dec 16 03:52:30.570000 audit[3416]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3402 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:30.570000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435336436373537343733626661346334373732396361626266373337 Dec 16 03:52:30.570000 audit: BPF prog-id=158 op=LOAD Dec 16 03:52:30.570000 audit[3416]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3402 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:30.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435336436373537343733626661346334373732396361626266373337 Dec 16 03:52:30.570000 audit: BPF prog-id=158 op=UNLOAD Dec 16 03:52:30.570000 audit[3416]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3402 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:30.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435336436373537343733626661346334373732396361626266373337 Dec 16 03:52:30.570000 audit: BPF prog-id=157 op=UNLOAD Dec 16 03:52:30.570000 audit[3416]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3402 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:52:30.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435336436373537343733626661346334373732396361626266373337 Dec 16 03:52:30.570000 audit: BPF prog-id=159 op=LOAD Dec 16 03:52:30.570000 audit[3416]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3402 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:30.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435336436373537343733626661346334373732396361626266373337 Dec 16 03:52:30.587082 kubelet[2985]: E1216 03:52:30.586930 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.587082 kubelet[2985]: W1216 03:52:30.586968 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.587082 kubelet[2985]: E1216 03:52:30.587029 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.595919 kubelet[2985]: E1216 03:52:30.595794 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.595919 kubelet[2985]: W1216 03:52:30.595822 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.595919 kubelet[2985]: E1216 03:52:30.595850 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.599996 kubelet[2985]: E1216 03:52:30.599901 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.599996 kubelet[2985]: W1216 03:52:30.599929 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.599996 kubelet[2985]: E1216 03:52:30.599948 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.661060 kubelet[2985]: E1216 03:52:30.660855 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:52:30.669308 kubelet[2985]: E1216 03:52:30.669117 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.669308 kubelet[2985]: W1216 03:52:30.669150 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.669308 kubelet[2985]: E1216 03:52:30.669183 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.669989 kubelet[2985]: E1216 03:52:30.669900 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.669989 kubelet[2985]: W1216 03:52:30.670032 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.669989 kubelet[2985]: E1216 03:52:30.670054 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.671189 kubelet[2985]: E1216 03:52:30.670996 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.671189 kubelet[2985]: W1216 03:52:30.671019 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.671189 kubelet[2985]: E1216 03:52:30.671037 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.671829 kubelet[2985]: E1216 03:52:30.671807 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.671947 kubelet[2985]: W1216 03:52:30.671924 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.672077 kubelet[2985]: E1216 03:52:30.672051 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.672753 kubelet[2985]: E1216 03:52:30.672510 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.672753 kubelet[2985]: W1216 03:52:30.672529 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.672753 kubelet[2985]: E1216 03:52:30.672545 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.673883 kubelet[2985]: E1216 03:52:30.673861 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.674126 kubelet[2985]: W1216 03:52:30.673987 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.674126 kubelet[2985]: E1216 03:52:30.674015 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.674402 kubelet[2985]: E1216 03:52:30.674369 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.674627 kubelet[2985]: W1216 03:52:30.674486 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.674627 kubelet[2985]: E1216 03:52:30.674510 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.674919 kubelet[2985]: E1216 03:52:30.674897 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.675030 kubelet[2985]: W1216 03:52:30.675008 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.675883 kubelet[2985]: E1216 03:52:30.675103 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.676165 kubelet[2985]: E1216 03:52:30.676144 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.676271 kubelet[2985]: W1216 03:52:30.676250 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.676364 kubelet[2985]: E1216 03:52:30.676343 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.676734 kubelet[2985]: E1216 03:52:30.676681 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.676734 kubelet[2985]: W1216 03:52:30.676701 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.677021 kubelet[2985]: E1216 03:52:30.676896 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.677556 kubelet[2985]: E1216 03:52:30.677182 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.678086 kubelet[2985]: W1216 03:52:30.677934 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.678086 kubelet[2985]: E1216 03:52:30.677963 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.678859 kubelet[2985]: E1216 03:52:30.678837 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.679433 kubelet[2985]: W1216 03:52:30.678978 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.679433 kubelet[2985]: E1216 03:52:30.679004 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.681138 kubelet[2985]: E1216 03:52:30.680863 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.681138 kubelet[2985]: W1216 03:52:30.680884 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.681138 kubelet[2985]: E1216 03:52:30.680909 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.682527 kubelet[2985]: E1216 03:52:30.682351 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.682527 kubelet[2985]: W1216 03:52:30.682372 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.682527 kubelet[2985]: E1216 03:52:30.682405 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.683839 kubelet[2985]: E1216 03:52:30.683159 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.683839 kubelet[2985]: W1216 03:52:30.683178 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.683839 kubelet[2985]: E1216 03:52:30.683195 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.685625 kubelet[2985]: E1216 03:52:30.685454 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.685625 kubelet[2985]: W1216 03:52:30.685475 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.685625 kubelet[2985]: E1216 03:52:30.685492 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.686813 kubelet[2985]: E1216 03:52:30.686275 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.686813 kubelet[2985]: W1216 03:52:30.686294 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.686813 kubelet[2985]: E1216 03:52:30.686312 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.689327 kubelet[2985]: E1216 03:52:30.688815 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.689327 kubelet[2985]: W1216 03:52:30.688836 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.689327 kubelet[2985]: E1216 03:52:30.688853 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.689936 kubelet[2985]: E1216 03:52:30.689822 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.690897 kubelet[2985]: W1216 03:52:30.690872 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.691132 kubelet[2985]: E1216 03:52:30.691108 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.692749 kubelet[2985]: E1216 03:52:30.692154 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.692749 kubelet[2985]: W1216 03:52:30.692174 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.692749 kubelet[2985]: E1216 03:52:30.692190 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.693457 kubelet[2985]: E1216 03:52:30.693407 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.693647 kubelet[2985]: W1216 03:52:30.693624 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.693789 kubelet[2985]: E1216 03:52:30.693767 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.693931 kubelet[2985]: I1216 03:52:30.693905 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/68135f19-2992-417d-b99b-f5dddddbe1d3-registration-dir\") pod \"csi-node-driver-4px76\" (UID: \"68135f19-2992-417d-b99b-f5dddddbe1d3\") " pod="calico-system/csi-node-driver-4px76" Dec 16 03:52:30.695213 kubelet[2985]: E1216 03:52:30.695108 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.695213 kubelet[2985]: W1216 03:52:30.695140 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.695213 kubelet[2985]: E1216 03:52:30.695180 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.695649 kubelet[2985]: E1216 03:52:30.695612 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.695649 kubelet[2985]: W1216 03:52:30.695635 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.695794 kubelet[2985]: E1216 03:52:30.695656 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.697180 kubelet[2985]: E1216 03:52:30.697118 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.697180 kubelet[2985]: W1216 03:52:30.697170 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.697298 kubelet[2985]: E1216 03:52:30.697188 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.697298 kubelet[2985]: I1216 03:52:30.697230 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/68135f19-2992-417d-b99b-f5dddddbe1d3-socket-dir\") pod \"csi-node-driver-4px76\" (UID: \"68135f19-2992-417d-b99b-f5dddddbe1d3\") " pod="calico-system/csi-node-driver-4px76" Dec 16 03:52:30.697624 kubelet[2985]: E1216 03:52:30.697598 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.697703 kubelet[2985]: W1216 03:52:30.697646 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.697703 kubelet[2985]: E1216 03:52:30.697678 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.698127 kubelet[2985]: I1216 03:52:30.698077 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/68135f19-2992-417d-b99b-f5dddddbe1d3-varrun\") pod \"csi-node-driver-4px76\" (UID: \"68135f19-2992-417d-b99b-f5dddddbe1d3\") " pod="calico-system/csi-node-driver-4px76" Dec 16 03:52:30.698983 kubelet[2985]: E1216 03:52:30.698940 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.698983 kubelet[2985]: W1216 03:52:30.698967 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.698983 kubelet[2985]: E1216 03:52:30.698984 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.700323 kubelet[2985]: E1216 03:52:30.699559 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.700323 kubelet[2985]: W1216 03:52:30.699583 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.700323 kubelet[2985]: E1216 03:52:30.699600 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.701482 kubelet[2985]: E1216 03:52:30.700827 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.701482 kubelet[2985]: W1216 03:52:30.700950 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.701482 kubelet[2985]: E1216 03:52:30.700975 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.701482 kubelet[2985]: I1216 03:52:30.701011 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrplh\" (UniqueName: \"kubernetes.io/projected/68135f19-2992-417d-b99b-f5dddddbe1d3-kube-api-access-wrplh\") pod \"csi-node-driver-4px76\" (UID: \"68135f19-2992-417d-b99b-f5dddddbe1d3\") " pod="calico-system/csi-node-driver-4px76" Dec 16 03:52:30.702301 kubelet[2985]: E1216 03:52:30.702210 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.702668 kubelet[2985]: W1216 03:52:30.702645 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.703133 kubelet[2985]: E1216 03:52:30.702776 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.703133 kubelet[2985]: I1216 03:52:30.702826 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68135f19-2992-417d-b99b-f5dddddbe1d3-kubelet-dir\") pod \"csi-node-driver-4px76\" (UID: \"68135f19-2992-417d-b99b-f5dddddbe1d3\") " pod="calico-system/csi-node-driver-4px76" Dec 16 03:52:30.704093 kubelet[2985]: E1216 03:52:30.704047 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.704889 kubelet[2985]: W1216 03:52:30.704260 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.704889 kubelet[2985]: E1216 03:52:30.704296 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.705224 kubelet[2985]: E1216 03:52:30.705204 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.705470 kubelet[2985]: W1216 03:52:30.705315 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.705470 kubelet[2985]: E1216 03:52:30.705340 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.705749 kubelet[2985]: E1216 03:52:30.705705 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.707166 kubelet[2985]: W1216 03:52:30.705942 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.707166 kubelet[2985]: E1216 03:52:30.705966 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.707467 kubelet[2985]: E1216 03:52:30.707445 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.707571 kubelet[2985]: W1216 03:52:30.707550 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.707688 kubelet[2985]: E1216 03:52:30.707668 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.708185 kubelet[2985]: E1216 03:52:30.708082 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.708701 kubelet[2985]: W1216 03:52:30.708675 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.709010 kubelet[2985]: E1216 03:52:30.708772 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.710058 kubelet[2985]: E1216 03:52:30.709992 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.710475 kubelet[2985]: W1216 03:52:30.710245 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.710475 kubelet[2985]: E1216 03:52:30.710273 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.728383 containerd[1640]: time="2025-12-16T03:52:30.728263141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c7c55c847-pkz2f,Uid:2f05687e-b6f5-4315-89f2-89dc20f0e8f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"d53d6757473bfa4c47729cabbf7375f15f36491b2ae14446bbf5ae4952657ec7\"" Dec 16 03:52:30.732193 containerd[1640]: time="2025-12-16T03:52:30.732146153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 03:52:30.750418 containerd[1640]: time="2025-12-16T03:52:30.750342079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sdkdt,Uid:a661d69c-5047-46b0-9863-7868df01e883,Namespace:calico-system,Attempt:0,}" Dec 16 03:52:30.805076 kubelet[2985]: E1216 03:52:30.804755 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.805076 kubelet[2985]: W1216 03:52:30.804793 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.805076 kubelet[2985]: E1216 03:52:30.804827 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.805998 kubelet[2985]: E1216 03:52:30.805768 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.805998 kubelet[2985]: W1216 03:52:30.805798 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.805998 kubelet[2985]: E1216 03:52:30.805815 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.806443 kubelet[2985]: E1216 03:52:30.806381 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.806766 kubelet[2985]: W1216 03:52:30.806400 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.806766 kubelet[2985]: E1216 03:52:30.806546 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.807280 kubelet[2985]: E1216 03:52:30.807251 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.807418 kubelet[2985]: W1216 03:52:30.807382 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.807669 kubelet[2985]: E1216 03:52:30.807470 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.808098 kubelet[2985]: E1216 03:52:30.807937 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.808098 kubelet[2985]: W1216 03:52:30.807956 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.808098 kubelet[2985]: E1216 03:52:30.807972 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.808555 kubelet[2985]: E1216 03:52:30.808500 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.808555 kubelet[2985]: W1216 03:52:30.808518 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.809276 kubelet[2985]: E1216 03:52:30.808651 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.809276 kubelet[2985]: E1216 03:52:30.809136 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.809276 kubelet[2985]: W1216 03:52:30.809150 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.809276 kubelet[2985]: E1216 03:52:30.809165 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.812104 kubelet[2985]: E1216 03:52:30.812065 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.812275 kubelet[2985]: W1216 03:52:30.812243 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.812492 kubelet[2985]: E1216 03:52:30.812287 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.813637 kubelet[2985]: E1216 03:52:30.813568 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.813637 kubelet[2985]: W1216 03:52:30.813590 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.813637 kubelet[2985]: E1216 03:52:30.813608 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.813992 kubelet[2985]: E1216 03:52:30.813966 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.814796 kubelet[2985]: W1216 03:52:30.814768 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.814796 kubelet[2985]: E1216 03:52:30.814798 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.816144 kubelet[2985]: E1216 03:52:30.815162 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.816144 kubelet[2985]: W1216 03:52:30.815177 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.816144 kubelet[2985]: E1216 03:52:30.815205 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.816144 kubelet[2985]: E1216 03:52:30.815605 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.816144 kubelet[2985]: W1216 03:52:30.815619 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.816144 kubelet[2985]: E1216 03:52:30.815635 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.818030 kubelet[2985]: E1216 03:52:30.816271 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.818030 kubelet[2985]: W1216 03:52:30.816286 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.818030 kubelet[2985]: E1216 03:52:30.816302 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.818030 kubelet[2985]: E1216 03:52:30.817784 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.818030 kubelet[2985]: W1216 03:52:30.817799 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.818030 kubelet[2985]: E1216 03:52:30.817814 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.821280 kubelet[2985]: E1216 03:52:30.820990 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.821280 kubelet[2985]: W1216 03:52:30.821032 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.821280 kubelet[2985]: E1216 03:52:30.821051 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.821502 kubelet[2985]: E1216 03:52:30.821327 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.821502 kubelet[2985]: W1216 03:52:30.821362 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.821502 kubelet[2985]: E1216 03:52:30.821380 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.825648 kubelet[2985]: E1216 03:52:30.824406 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.825648 kubelet[2985]: W1216 03:52:30.824445 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.825648 kubelet[2985]: E1216 03:52:30.824463 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.826019 kubelet[2985]: E1216 03:52:30.825860 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.826019 kubelet[2985]: W1216 03:52:30.825909 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.826019 kubelet[2985]: E1216 03:52:30.825928 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.828884 kubelet[2985]: E1216 03:52:30.828352 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.828884 kubelet[2985]: W1216 03:52:30.828667 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.828884 kubelet[2985]: E1216 03:52:30.828686 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.831749 kubelet[2985]: E1216 03:52:30.831538 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.831749 kubelet[2985]: W1216 03:52:30.831564 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.831749 kubelet[2985]: E1216 03:52:30.831589 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.832322 kubelet[2985]: E1216 03:52:30.832123 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.832322 kubelet[2985]: W1216 03:52:30.832146 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.832322 kubelet[2985]: E1216 03:52:30.832163 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.832747 kubelet[2985]: E1216 03:52:30.832646 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.832747 kubelet[2985]: W1216 03:52:30.832665 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.832747 kubelet[2985]: E1216 03:52:30.832682 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.833244 kubelet[2985]: E1216 03:52:30.833224 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.833803 kubelet[2985]: W1216 03:52:30.833778 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.833908 kubelet[2985]: E1216 03:52:30.833885 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.834460 kubelet[2985]: E1216 03:52:30.834427 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.834568 kubelet[2985]: W1216 03:52:30.834546 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.834663 kubelet[2985]: E1216 03:52:30.834642 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.835920 kubelet[2985]: E1216 03:52:30.835898 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.836795 kubelet[2985]: W1216 03:52:30.836482 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.836795 kubelet[2985]: E1216 03:52:30.836511 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:30.837183 containerd[1640]: time="2025-12-16T03:52:30.837128131Z" level=info msg="connecting to shim 816a5814d707b62b3cba02e420f8e025ac235ef93c23f02b7e2bd4c6de376887" address="unix:///run/containerd/s/f62444bc3037e9ebd1bfdcd82e6fd65bceb964606774d7b097bb7f9680d87fa2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:52:30.886786 kubelet[2985]: E1216 03:52:30.885834 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:30.886786 kubelet[2985]: W1216 03:52:30.885865 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:30.886786 kubelet[2985]: E1216 03:52:30.885892 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:30.920986 systemd[1]: Started cri-containerd-816a5814d707b62b3cba02e420f8e025ac235ef93c23f02b7e2bd4c6de376887.scope - libcontainer container 816a5814d707b62b3cba02e420f8e025ac235ef93c23f02b7e2bd4c6de376887. 
Dec 16 03:52:30.941000 audit: BPF prog-id=160 op=LOAD Dec 16 03:52:30.942000 audit: BPF prog-id=161 op=LOAD Dec 16 03:52:30.942000 audit[3539]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=3513 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:30.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831366135383134643730376236326233636261303265343230663865 Dec 16 03:52:30.942000 audit: BPF prog-id=161 op=UNLOAD Dec 16 03:52:30.942000 audit[3539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3513 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:30.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831366135383134643730376236326233636261303265343230663865 Dec 16 03:52:30.942000 audit: BPF prog-id=162 op=LOAD Dec 16 03:52:30.942000 audit[3539]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=3513 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:30.942000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831366135383134643730376236326233636261303265343230663865 Dec 16 03:52:30.942000 audit: BPF prog-id=163 op=LOAD Dec 16 03:52:30.942000 audit[3539]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=3513 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:30.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831366135383134643730376236326233636261303265343230663865 Dec 16 03:52:30.942000 audit: BPF prog-id=163 op=UNLOAD Dec 16 03:52:30.942000 audit[3539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3513 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:30.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831366135383134643730376236326233636261303265343230663865 Dec 16 03:52:30.942000 audit: BPF prog-id=162 op=UNLOAD Dec 16 03:52:30.942000 audit[3539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3513 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:52:30.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831366135383134643730376236326233636261303265343230663865 Dec 16 03:52:30.942000 audit: BPF prog-id=164 op=LOAD Dec 16 03:52:30.942000 audit[3539]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=3513 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:30.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831366135383134643730376236326233636261303265343230663865 Dec 16 03:52:30.981175 containerd[1640]: time="2025-12-16T03:52:30.981125730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sdkdt,Uid:a661d69c-5047-46b0-9863-7868df01e883,Namespace:calico-system,Attempt:0,} returns sandbox id \"816a5814d707b62b3cba02e420f8e025ac235ef93c23f02b7e2bd4c6de376887\"" Dec 16 03:52:31.126000 audit[3568]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:31.126000 audit[3568]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe344d6db0 a2=0 a3=7ffe344d6d9c items=0 ppid=3138 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:31.126000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Dec 16 03:52:31.130000 audit[3568]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:31.130000 audit[3568]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe344d6db0 a2=0 a3=0 items=0 ppid=3138 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:31.130000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:32.193354 kubelet[2985]: E1216 03:52:32.191178 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:52:32.540045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2224435615.mount: Deactivated successfully. 
Dec 16 03:52:34.192346 kubelet[2985]: E1216 03:52:34.191773 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:52:35.190762 containerd[1640]: time="2025-12-16T03:52:35.190048349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:52:35.192116 containerd[1640]: time="2025-12-16T03:52:35.192073593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 03:52:35.193494 containerd[1640]: time="2025-12-16T03:52:35.193436154Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:52:35.197461 containerd[1640]: time="2025-12-16T03:52:35.197399026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:52:35.199739 containerd[1640]: time="2025-12-16T03:52:35.199547923Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.467195858s" Dec 16 03:52:35.199739 containerd[1640]: time="2025-12-16T03:52:35.199601737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 03:52:35.200974 containerd[1640]: time="2025-12-16T03:52:35.200884502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 03:52:35.224031 containerd[1640]: time="2025-12-16T03:52:35.223982121Z" level=info msg="CreateContainer within sandbox \"d53d6757473bfa4c47729cabbf7375f15f36491b2ae14446bbf5ae4952657ec7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 03:52:35.245023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount213639018.mount: Deactivated successfully. Dec 16 03:52:35.259427 containerd[1640]: time="2025-12-16T03:52:35.259366737Z" level=info msg="Container 59f9bfe468939e96dce8d7821bd933731c131cefcbd6fa26e37dc85b05b5aabd: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:52:35.267279 containerd[1640]: time="2025-12-16T03:52:35.267183925Z" level=info msg="CreateContainer within sandbox \"d53d6757473bfa4c47729cabbf7375f15f36491b2ae14446bbf5ae4952657ec7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"59f9bfe468939e96dce8d7821bd933731c131cefcbd6fa26e37dc85b05b5aabd\"" Dec 16 03:52:35.268049 containerd[1640]: time="2025-12-16T03:52:35.268000058Z" level=info msg="StartContainer for \"59f9bfe468939e96dce8d7821bd933731c131cefcbd6fa26e37dc85b05b5aabd\"" Dec 16 03:52:35.269617 containerd[1640]: time="2025-12-16T03:52:35.269582660Z" level=info msg="connecting to shim 59f9bfe468939e96dce8d7821bd933731c131cefcbd6fa26e37dc85b05b5aabd" address="unix:///run/containerd/s/4a24025e39f6d2fee334c20804b941f9841ba1a45d18a9cc76d72832a40b828a" protocol=ttrpc version=3 Dec 16 03:52:35.341053 systemd[1]: Started cri-containerd-59f9bfe468939e96dce8d7821bd933731c131cefcbd6fa26e37dc85b05b5aabd.scope - libcontainer container 59f9bfe468939e96dce8d7821bd933731c131cefcbd6fa26e37dc85b05b5aabd. 
Dec 16 03:52:35.363000 audit: BPF prog-id=165 op=LOAD Dec 16 03:52:35.366358 kernel: kauditd_printk_skb: 52 callbacks suppressed Dec 16 03:52:35.366451 kernel: audit: type=1334 audit(1765857155.363:556): prog-id=165 op=LOAD Dec 16 03:52:35.368000 audit: BPF prog-id=166 op=LOAD Dec 16 03:52:35.368000 audit[3580]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3402 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:35.373808 kernel: audit: type=1334 audit(1765857155.368:557): prog-id=166 op=LOAD Dec 16 03:52:35.373876 kernel: audit: type=1300 audit(1765857155.368:557): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3402 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:35.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663962666534363839333965393664636538643738323162643933 Dec 16 03:52:35.379334 kernel: audit: type=1327 audit(1765857155.368:557): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663962666534363839333965393664636538643738323162643933 Dec 16 03:52:35.368000 audit: BPF prog-id=166 op=UNLOAD Dec 16 03:52:35.368000 audit[3580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3402 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:35.386737 kernel: audit: type=1334 audit(1765857155.368:558): prog-id=166 op=UNLOAD Dec 16 03:52:35.386823 kernel: audit: type=1300 audit(1765857155.368:558): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3402 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:35.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663962666534363839333965393664636538643738323162643933 Dec 16 03:52:35.392004 kernel: audit: type=1327 audit(1765857155.368:558): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663962666534363839333965393664636538643738323162643933 Dec 16 03:52:35.368000 audit: BPF prog-id=167 op=LOAD Dec 16 03:52:35.395911 kernel: audit: type=1334 audit(1765857155.368:559): prog-id=167 op=LOAD Dec 16 03:52:35.368000 audit[3580]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3402 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:35.398913 kernel: audit: type=1300 audit(1765857155.368:559): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3402 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:52:35.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663962666534363839333965393664636538643738323162643933 Dec 16 03:52:35.411740 kernel: audit: type=1327 audit(1765857155.368:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663962666534363839333965393664636538643738323162643933 Dec 16 03:52:35.369000 audit: BPF prog-id=168 op=LOAD Dec 16 03:52:35.369000 audit[3580]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3402 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:35.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663962666534363839333965393664636538643738323162643933 Dec 16 03:52:35.369000 audit: BPF prog-id=168 op=UNLOAD Dec 16 03:52:35.369000 audit[3580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3402 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:35.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663962666534363839333965393664636538643738323162643933 
Dec 16 03:52:35.369000 audit: BPF prog-id=167 op=UNLOAD Dec 16 03:52:35.369000 audit[3580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3402 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:35.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663962666534363839333965393664636538643738323162643933 Dec 16 03:52:35.369000 audit: BPF prog-id=169 op=LOAD Dec 16 03:52:35.369000 audit[3580]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3402 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:35.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663962666534363839333965393664636538643738323162643933 Dec 16 03:52:35.465061 containerd[1640]: time="2025-12-16T03:52:35.462746490Z" level=info msg="StartContainer for \"59f9bfe468939e96dce8d7821bd933731c131cefcbd6fa26e37dc85b05b5aabd\" returns successfully" Dec 16 03:52:36.192541 kubelet[2985]: E1216 03:52:36.191018 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:52:36.434694 kubelet[2985]: E1216 
03:52:36.434496 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.434694 kubelet[2985]: W1216 03:52:36.434530 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.434694 kubelet[2985]: E1216 03:52:36.434555 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.435102 kubelet[2985]: E1216 03:52:36.435077 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.435240 kubelet[2985]: W1216 03:52:36.435218 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.436783 kubelet[2985]: E1216 03:52:36.436758 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.437759 kubelet[2985]: E1216 03:52:36.437510 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.437759 kubelet[2985]: W1216 03:52:36.437534 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.437759 kubelet[2985]: E1216 03:52:36.437551 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:36.438060 kubelet[2985]: E1216 03:52:36.438040 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.438394 kubelet[2985]: W1216 03:52:36.438368 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.438520 kubelet[2985]: E1216 03:52:36.438497 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.439261 kubelet[2985]: E1216 03:52:36.439024 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.439261 kubelet[2985]: W1216 03:52:36.439043 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.439261 kubelet[2985]: E1216 03:52:36.439059 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:36.439576 kubelet[2985]: E1216 03:52:36.439556 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.439686 kubelet[2985]: W1216 03:52:36.439664 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.439820 kubelet[2985]: E1216 03:52:36.439800 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.440346 kubelet[2985]: E1216 03:52:36.440174 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.440346 kubelet[2985]: W1216 03:52:36.440192 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.440346 kubelet[2985]: E1216 03:52:36.440208 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:36.440621 kubelet[2985]: E1216 03:52:36.440602 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.440752 kubelet[2985]: W1216 03:52:36.440707 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.441004 kubelet[2985]: E1216 03:52:36.440842 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.441164 kubelet[2985]: E1216 03:52:36.441145 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.441338 kubelet[2985]: W1216 03:52:36.441265 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.441605 kubelet[2985]: E1216 03:52:36.441301 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:36.442822 kubelet[2985]: E1216 03:52:36.442673 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.442822 kubelet[2985]: W1216 03:52:36.442692 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.443926 kubelet[2985]: E1216 03:52:36.442711 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.444823 kubelet[2985]: I1216 03:52:36.444523 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c7c55c847-pkz2f" podStartSLOduration=1.975069583 podStartE2EDuration="6.444509818s" podCreationTimestamp="2025-12-16 03:52:30 +0000 UTC" firstStartedPulling="2025-12-16 03:52:30.73111791 +0000 UTC m=+26.745744919" lastFinishedPulling="2025-12-16 03:52:35.200558131 +0000 UTC m=+31.215185154" observedRunningTime="2025-12-16 03:52:36.442627372 +0000 UTC m=+32.457254411" watchObservedRunningTime="2025-12-16 03:52:36.444509818 +0000 UTC m=+32.459136846" Dec 16 03:52:36.445190 kubelet[2985]: E1216 03:52:36.445050 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.445260 kubelet[2985]: W1216 03:52:36.445190 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.445260 kubelet[2985]: E1216 03:52:36.445212 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:36.446587 kubelet[2985]: E1216 03:52:36.446557 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.446587 kubelet[2985]: W1216 03:52:36.446578 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.446754 kubelet[2985]: E1216 03:52:36.446595 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.446886 kubelet[2985]: E1216 03:52:36.446866 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.446886 kubelet[2985]: W1216 03:52:36.446886 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.447005 kubelet[2985]: E1216 03:52:36.446903 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:36.447344 kubelet[2985]: E1216 03:52:36.447323 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.447344 kubelet[2985]: W1216 03:52:36.447343 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.447502 kubelet[2985]: E1216 03:52:36.447359 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.447891 kubelet[2985]: E1216 03:52:36.447867 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.447891 kubelet[2985]: W1216 03:52:36.447887 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.448002 kubelet[2985]: E1216 03:52:36.447903 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:36.456372 kubelet[2985]: E1216 03:52:36.456168 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.456372 kubelet[2985]: W1216 03:52:36.456204 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.456372 kubelet[2985]: E1216 03:52:36.456226 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.457860 kubelet[2985]: E1216 03:52:36.457752 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.457860 kubelet[2985]: W1216 03:52:36.457781 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.457860 kubelet[2985]: E1216 03:52:36.457797 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:36.458854 kubelet[2985]: E1216 03:52:36.458824 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.458854 kubelet[2985]: W1216 03:52:36.458849 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.458974 kubelet[2985]: E1216 03:52:36.458869 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.460136 kubelet[2985]: E1216 03:52:36.459673 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.460136 kubelet[2985]: W1216 03:52:36.459695 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.460136 kubelet[2985]: E1216 03:52:36.459735 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:36.460136 kubelet[2985]: E1216 03:52:36.460036 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.460136 kubelet[2985]: W1216 03:52:36.460050 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.460136 kubelet[2985]: E1216 03:52:36.460066 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.461026 kubelet[2985]: E1216 03:52:36.460989 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.461026 kubelet[2985]: W1216 03:52:36.461018 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.461232 kubelet[2985]: E1216 03:52:36.461035 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:36.461692 kubelet[2985]: E1216 03:52:36.461660 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.461692 kubelet[2985]: W1216 03:52:36.461685 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.461905 kubelet[2985]: E1216 03:52:36.461701 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.462022 kubelet[2985]: E1216 03:52:36.461990 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.462022 kubelet[2985]: W1216 03:52:36.462014 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.462157 kubelet[2985]: E1216 03:52:36.462030 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:36.462324 kubelet[2985]: E1216 03:52:36.462285 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.462324 kubelet[2985]: W1216 03:52:36.462316 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.462474 kubelet[2985]: E1216 03:52:36.462331 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.462683 kubelet[2985]: E1216 03:52:36.462650 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.462683 kubelet[2985]: W1216 03:52:36.462674 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.462913 kubelet[2985]: E1216 03:52:36.462690 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:36.463091 kubelet[2985]: E1216 03:52:36.463059 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.463091 kubelet[2985]: W1216 03:52:36.463084 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.463216 kubelet[2985]: E1216 03:52:36.463101 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.463371 kubelet[2985]: E1216 03:52:36.463349 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.463371 kubelet[2985]: W1216 03:52:36.463369 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.463554 kubelet[2985]: E1216 03:52:36.463384 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:36.463675 kubelet[2985]: E1216 03:52:36.463655 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.463768 kubelet[2985]: W1216 03:52:36.463678 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.463768 kubelet[2985]: E1216 03:52:36.463694 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.464386 kubelet[2985]: E1216 03:52:36.464365 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.464386 kubelet[2985]: W1216 03:52:36.464384 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.464539 kubelet[2985]: E1216 03:52:36.464399 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:36.464741 kubelet[2985]: E1216 03:52:36.464682 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.464741 kubelet[2985]: W1216 03:52:36.464705 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.464912 kubelet[2985]: E1216 03:52:36.464751 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.465148 kubelet[2985]: E1216 03:52:36.465116 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.465148 kubelet[2985]: W1216 03:52:36.465141 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.465444 kubelet[2985]: E1216 03:52:36.465157 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:36.465803 kubelet[2985]: E1216 03:52:36.465678 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.465803 kubelet[2985]: W1216 03:52:36.465740 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.465803 kubelet[2985]: E1216 03:52:36.465761 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:52:36.466626 kubelet[2985]: E1216 03:52:36.466603 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:52:36.466626 kubelet[2985]: W1216 03:52:36.466623 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:52:36.466772 kubelet[2985]: E1216 03:52:36.466652 2985 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:52:36.861009 containerd[1640]: time="2025-12-16T03:52:36.860941329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:52:36.862880 containerd[1640]: time="2025-12-16T03:52:36.862845222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 03:52:36.863774 containerd[1640]: time="2025-12-16T03:52:36.863544471Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:52:36.891125 containerd[1640]: time="2025-12-16T03:52:36.890107280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:52:36.891430 containerd[1640]: time="2025-12-16T03:52:36.891391763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.690377668s" Dec 16 03:52:36.891596 containerd[1640]: time="2025-12-16T03:52:36.891566346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 03:52:36.900274 containerd[1640]: time="2025-12-16T03:52:36.900232472Z" level=info msg="CreateContainer within sandbox \"816a5814d707b62b3cba02e420f8e025ac235ef93c23f02b7e2bd4c6de376887\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 03:52:36.925413 containerd[1640]: time="2025-12-16T03:52:36.925346942Z" level=info msg="Container 28dad4d18cd3746b802c381fc56181bc6e8a32752d645c272ae017e600ed3a3b: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:52:36.949910 containerd[1640]: time="2025-12-16T03:52:36.949791023Z" level=info msg="CreateContainer within sandbox \"816a5814d707b62b3cba02e420f8e025ac235ef93c23f02b7e2bd4c6de376887\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"28dad4d18cd3746b802c381fc56181bc6e8a32752d645c272ae017e600ed3a3b\"" Dec 16 03:52:36.951598 containerd[1640]: time="2025-12-16T03:52:36.951549370Z" level=info msg="StartContainer for \"28dad4d18cd3746b802c381fc56181bc6e8a32752d645c272ae017e600ed3a3b\"" Dec 16 03:52:36.956207 containerd[1640]: time="2025-12-16T03:52:36.956023792Z" level=info msg="connecting to shim 28dad4d18cd3746b802c381fc56181bc6e8a32752d645c272ae017e600ed3a3b" address="unix:///run/containerd/s/f62444bc3037e9ebd1bfdcd82e6fd65bceb964606774d7b097bb7f9680d87fa2" protocol=ttrpc version=3 Dec 16 03:52:36.995982 systemd[1]: Started cri-containerd-28dad4d18cd3746b802c381fc56181bc6e8a32752d645c272ae017e600ed3a3b.scope - libcontainer container 28dad4d18cd3746b802c381fc56181bc6e8a32752d645c272ae017e600ed3a3b. 
Dec 16 03:52:37.071000 audit: BPF prog-id=170 op=LOAD Dec 16 03:52:37.071000 audit[3656]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3513 pid=3656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:37.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238646164346431386364333734366238303263333831666335363138 Dec 16 03:52:37.071000 audit: BPF prog-id=171 op=LOAD Dec 16 03:52:37.071000 audit[3656]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3513 pid=3656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:37.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238646164346431386364333734366238303263333831666335363138 Dec 16 03:52:37.071000 audit: BPF prog-id=171 op=UNLOAD Dec 16 03:52:37.071000 audit[3656]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3513 pid=3656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:37.071000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238646164346431386364333734366238303263333831666335363138 Dec 16 03:52:37.072000 audit: BPF prog-id=170 op=UNLOAD Dec 16 03:52:37.072000 audit[3656]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3513 pid=3656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:37.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238646164346431386364333734366238303263333831666335363138 Dec 16 03:52:37.072000 audit: BPF prog-id=172 op=LOAD Dec 16 03:52:37.072000 audit[3656]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3513 pid=3656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:37.072000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238646164346431386364333734366238303263333831666335363138 Dec 16 03:52:37.113180 containerd[1640]: time="2025-12-16T03:52:37.112256829Z" level=info msg="StartContainer for \"28dad4d18cd3746b802c381fc56181bc6e8a32752d645c272ae017e600ed3a3b\" returns successfully" Dec 16 03:52:37.132332 systemd[1]: cri-containerd-28dad4d18cd3746b802c381fc56181bc6e8a32752d645c272ae017e600ed3a3b.scope: Deactivated successfully. 
Dec 16 03:52:37.135000 audit: BPF prog-id=172 op=UNLOAD Dec 16 03:52:37.168196 containerd[1640]: time="2025-12-16T03:52:37.167971622Z" level=info msg="received container exit event container_id:\"28dad4d18cd3746b802c381fc56181bc6e8a32752d645c272ae017e600ed3a3b\" id:\"28dad4d18cd3746b802c381fc56181bc6e8a32752d645c272ae017e600ed3a3b\" pid:3669 exited_at:{seconds:1765857157 nanos:136248558}" Dec 16 03:52:37.210377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28dad4d18cd3746b802c381fc56181bc6e8a32752d645c272ae017e600ed3a3b-rootfs.mount: Deactivated successfully. Dec 16 03:52:37.430540 kubelet[2985]: I1216 03:52:37.430344 2985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 03:52:38.192793 kubelet[2985]: E1216 03:52:38.192654 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:52:38.438443 containerd[1640]: time="2025-12-16T03:52:38.437966448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 03:52:40.191738 kubelet[2985]: E1216 03:52:40.191156 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:52:42.197331 kubelet[2985]: E1216 03:52:42.197167 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4px76" 
podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:52:44.234034 kubelet[2985]: E1216 03:52:44.233979 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:52:45.295646 containerd[1640]: time="2025-12-16T03:52:45.295468854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:52:45.298001 containerd[1640]: time="2025-12-16T03:52:45.297602208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 03:52:45.298660 containerd[1640]: time="2025-12-16T03:52:45.298581026Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:52:45.303180 containerd[1640]: time="2025-12-16T03:52:45.303141550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:52:45.305347 containerd[1640]: time="2025-12-16T03:52:45.305271127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.86724724s" Dec 16 03:52:45.305347 containerd[1640]: time="2025-12-16T03:52:45.305326320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference 
\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 03:52:45.313083 containerd[1640]: time="2025-12-16T03:52:45.312955854Z" level=info msg="CreateContainer within sandbox \"816a5814d707b62b3cba02e420f8e025ac235ef93c23f02b7e2bd4c6de376887\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 03:52:45.328414 containerd[1640]: time="2025-12-16T03:52:45.324696903Z" level=info msg="Container df1728567569cd3be7e76163f6b092e07a79d0315eb04ba4ecd5c246ec800b29: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:52:45.348366 containerd[1640]: time="2025-12-16T03:52:45.348292528Z" level=info msg="CreateContainer within sandbox \"816a5814d707b62b3cba02e420f8e025ac235ef93c23f02b7e2bd4c6de376887\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"df1728567569cd3be7e76163f6b092e07a79d0315eb04ba4ecd5c246ec800b29\"" Dec 16 03:52:45.351624 containerd[1640]: time="2025-12-16T03:52:45.350036195Z" level=info msg="StartContainer for \"df1728567569cd3be7e76163f6b092e07a79d0315eb04ba4ecd5c246ec800b29\"" Dec 16 03:52:45.354156 containerd[1640]: time="2025-12-16T03:52:45.354092608Z" level=info msg="connecting to shim df1728567569cd3be7e76163f6b092e07a79d0315eb04ba4ecd5c246ec800b29" address="unix:///run/containerd/s/f62444bc3037e9ebd1bfdcd82e6fd65bceb964606774d7b097bb7f9680d87fa2" protocol=ttrpc version=3 Dec 16 03:52:45.397089 systemd[1]: Started cri-containerd-df1728567569cd3be7e76163f6b092e07a79d0315eb04ba4ecd5c246ec800b29.scope - libcontainer container df1728567569cd3be7e76163f6b092e07a79d0315eb04ba4ecd5c246ec800b29. 
Dec 16 03:52:45.496062 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 16 03:52:45.496431 kernel: audit: type=1334 audit(1765857165.488:570): prog-id=173 op=LOAD Dec 16 03:52:45.488000 audit: BPF prog-id=173 op=LOAD Dec 16 03:52:45.488000 audit[3716]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3513 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:45.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466313732383536373536396364336265376537363136336636623039 Dec 16 03:52:45.505231 kernel: audit: type=1300 audit(1765857165.488:570): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3513 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:45.505321 kernel: audit: type=1327 audit(1765857165.488:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466313732383536373536396364336265376537363136336636623039 Dec 16 03:52:45.512822 kernel: audit: type=1334 audit(1765857165.496:571): prog-id=174 op=LOAD Dec 16 03:52:45.496000 audit: BPF prog-id=174 op=LOAD Dec 16 03:52:45.496000 audit[3716]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3513 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:45.519782 kernel: audit: type=1300 audit(1765857165.496:571): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3513 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:45.520270 kernel: audit: type=1327 audit(1765857165.496:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466313732383536373536396364336265376537363136336636623039 Dec 16 03:52:45.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466313732383536373536396364336265376537363136336636623039 Dec 16 03:52:45.496000 audit: BPF prog-id=174 op=UNLOAD Dec 16 03:52:45.526203 kernel: audit: type=1334 audit(1765857165.496:572): prog-id=174 op=UNLOAD Dec 16 03:52:45.528016 kernel: audit: type=1300 audit(1765857165.496:572): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3513 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:45.496000 audit[3716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3513 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:45.496000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466313732383536373536396364336265376537363136336636623039 Dec 16 03:52:45.535274 kernel: audit: type=1327 audit(1765857165.496:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466313732383536373536396364336265376537363136336636623039 Dec 16 03:52:45.496000 audit: BPF prog-id=173 op=UNLOAD Dec 16 03:52:45.496000 audit[3716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3513 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:45.542132 kernel: audit: type=1334 audit(1765857165.496:573): prog-id=173 op=UNLOAD Dec 16 03:52:45.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466313732383536373536396364336265376537363136336636623039 Dec 16 03:52:45.496000 audit: BPF prog-id=175 op=LOAD Dec 16 03:52:45.496000 audit[3716]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3513 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:45.496000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466313732383536373536396364336265376537363136336636623039 Dec 16 03:52:45.565506 containerd[1640]: time="2025-12-16T03:52:45.565355432Z" level=info msg="StartContainer for \"df1728567569cd3be7e76163f6b092e07a79d0315eb04ba4ecd5c246ec800b29\" returns successfully" Dec 16 03:52:46.192209 kubelet[2985]: E1216 03:52:46.191957 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:52:46.651545 systemd[1]: cri-containerd-df1728567569cd3be7e76163f6b092e07a79d0315eb04ba4ecd5c246ec800b29.scope: Deactivated successfully. Dec 16 03:52:46.652107 systemd[1]: cri-containerd-df1728567569cd3be7e76163f6b092e07a79d0315eb04ba4ecd5c246ec800b29.scope: Consumed 846ms CPU time, 164.5M memory peak, 6M read from disk, 171.3M written to disk. Dec 16 03:52:46.657000 audit: BPF prog-id=175 op=UNLOAD Dec 16 03:52:46.690399 containerd[1640]: time="2025-12-16T03:52:46.690333479Z" level=info msg="received container exit event container_id:\"df1728567569cd3be7e76163f6b092e07a79d0315eb04ba4ecd5c246ec800b29\" id:\"df1728567569cd3be7e76163f6b092e07a79d0315eb04ba4ecd5c246ec800b29\" pid:3729 exited_at:{seconds:1765857166 nanos:680862715}" Dec 16 03:52:46.721840 kubelet[2985]: I1216 03:52:46.721796 2985 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 03:52:46.792599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df1728567569cd3be7e76163f6b092e07a79d0315eb04ba4ecd5c246ec800b29-rootfs.mount: Deactivated successfully. 
Dec 16 03:52:46.979638 systemd[1]: Created slice kubepods-burstable-podac3cb9e0_75ec_4604_aa8a_c0773489c9c8.slice - libcontainer container kubepods-burstable-podac3cb9e0_75ec_4604_aa8a_c0773489c9c8.slice. Dec 16 03:52:47.007821 systemd[1]: Created slice kubepods-besteffort-pod050887c2_252c_4b73_aa88_7211b3356790.slice - libcontainer container kubepods-besteffort-pod050887c2_252c_4b73_aa88_7211b3356790.slice. Dec 16 03:52:47.024642 systemd[1]: Created slice kubepods-besteffort-pod84a83236_0b05_4630_b32a_21eabf997946.slice - libcontainer container kubepods-besteffort-pod84a83236_0b05_4630_b32a_21eabf997946.slice. Dec 16 03:52:47.038300 systemd[1]: Created slice kubepods-besteffort-podbd783b73_c05d_44ca_8c17_099b3c38bb45.slice - libcontainer container kubepods-besteffort-podbd783b73_c05d_44ca_8c17_099b3c38bb45.slice. Dec 16 03:52:47.048429 kubelet[2985]: I1216 03:52:47.047705 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/050887c2-252c-4b73-aa88-7211b3356790-calico-apiserver-certs\") pod \"calico-apiserver-64754b8886-qt5xx\" (UID: \"050887c2-252c-4b73-aa88-7211b3356790\") " pod="calico-apiserver/calico-apiserver-64754b8886-qt5xx" Dec 16 03:52:47.048429 kubelet[2985]: I1216 03:52:47.047781 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psz9h\" (UniqueName: \"kubernetes.io/projected/050887c2-252c-4b73-aa88-7211b3356790-kube-api-access-psz9h\") pod \"calico-apiserver-64754b8886-qt5xx\" (UID: \"050887c2-252c-4b73-aa88-7211b3356790\") " pod="calico-apiserver/calico-apiserver-64754b8886-qt5xx" Dec 16 03:52:47.048429 kubelet[2985]: I1216 03:52:47.047815 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac3cb9e0-75ec-4604-aa8a-c0773489c9c8-config-volume\") pod 
\"coredns-674b8bbfcf-jqv5z\" (UID: \"ac3cb9e0-75ec-4604-aa8a-c0773489c9c8\") " pod="kube-system/coredns-674b8bbfcf-jqv5z" Dec 16 03:52:47.048429 kubelet[2985]: I1216 03:52:47.047847 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f69ws\" (UniqueName: \"kubernetes.io/projected/2a73cfff-498a-4801-a9d9-f3b2fd31770b-kube-api-access-f69ws\") pod \"coredns-674b8bbfcf-xjdhd\" (UID: \"2a73cfff-498a-4801-a9d9-f3b2fd31770b\") " pod="kube-system/coredns-674b8bbfcf-xjdhd" Dec 16 03:52:47.048429 kubelet[2985]: I1216 03:52:47.047874 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gqmv\" (UniqueName: \"kubernetes.io/projected/ac3cb9e0-75ec-4604-aa8a-c0773489c9c8-kube-api-access-9gqmv\") pod \"coredns-674b8bbfcf-jqv5z\" (UID: \"ac3cb9e0-75ec-4604-aa8a-c0773489c9c8\") " pod="kube-system/coredns-674b8bbfcf-jqv5z" Dec 16 03:52:47.050097 kubelet[2985]: I1216 03:52:47.047913 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l2pk\" (UniqueName: \"kubernetes.io/projected/84a83236-0b05-4630-b32a-21eabf997946-kube-api-access-7l2pk\") pod \"calico-apiserver-64754b8886-wksvf\" (UID: \"84a83236-0b05-4630-b32a-21eabf997946\") " pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" Dec 16 03:52:47.050097 kubelet[2985]: I1216 03:52:47.047960 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd783b73-c05d-44ca-8c17-099b3c38bb45-whisker-backend-key-pair\") pod \"whisker-d6c88684d-r9b2s\" (UID: \"bd783b73-c05d-44ca-8c17-099b3c38bb45\") " pod="calico-system/whisker-d6c88684d-r9b2s" Dec 16 03:52:47.050097 kubelet[2985]: I1216 03:52:47.047989 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bd783b73-c05d-44ca-8c17-099b3c38bb45-whisker-ca-bundle\") pod \"whisker-d6c88684d-r9b2s\" (UID: \"bd783b73-c05d-44ca-8c17-099b3c38bb45\") " pod="calico-system/whisker-d6c88684d-r9b2s" Dec 16 03:52:47.050097 kubelet[2985]: I1216 03:52:47.048025 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a73cfff-498a-4801-a9d9-f3b2fd31770b-config-volume\") pod \"coredns-674b8bbfcf-xjdhd\" (UID: \"2a73cfff-498a-4801-a9d9-f3b2fd31770b\") " pod="kube-system/coredns-674b8bbfcf-xjdhd" Dec 16 03:52:47.050097 kubelet[2985]: I1216 03:52:47.048051 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d650a02a-f60f-480c-a3b2-4b144ff8489f-goldmane-ca-bundle\") pod \"goldmane-666569f655-ngtdd\" (UID: \"d650a02a-f60f-480c-a3b2-4b144ff8489f\") " pod="calico-system/goldmane-666569f655-ngtdd" Dec 16 03:52:47.050376 kubelet[2985]: I1216 03:52:47.048096 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtpzd\" (UniqueName: \"kubernetes.io/projected/bd783b73-c05d-44ca-8c17-099b3c38bb45-kube-api-access-rtpzd\") pod \"whisker-d6c88684d-r9b2s\" (UID: \"bd783b73-c05d-44ca-8c17-099b3c38bb45\") " pod="calico-system/whisker-d6c88684d-r9b2s" Dec 16 03:52:47.050376 kubelet[2985]: I1216 03:52:47.048124 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84a83236-0b05-4630-b32a-21eabf997946-calico-apiserver-certs\") pod \"calico-apiserver-64754b8886-wksvf\" (UID: \"84a83236-0b05-4630-b32a-21eabf997946\") " pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" Dec 16 03:52:47.050376 kubelet[2985]: I1216 03:52:47.048202 2985 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d650a02a-f60f-480c-a3b2-4b144ff8489f-goldmane-key-pair\") pod \"goldmane-666569f655-ngtdd\" (UID: \"d650a02a-f60f-480c-a3b2-4b144ff8489f\") " pod="calico-system/goldmane-666569f655-ngtdd" Dec 16 03:52:47.050376 kubelet[2985]: I1216 03:52:47.048260 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nstgv\" (UniqueName: \"kubernetes.io/projected/d650a02a-f60f-480c-a3b2-4b144ff8489f-kube-api-access-nstgv\") pod \"goldmane-666569f655-ngtdd\" (UID: \"d650a02a-f60f-480c-a3b2-4b144ff8489f\") " pod="calico-system/goldmane-666569f655-ngtdd" Dec 16 03:52:47.050376 kubelet[2985]: I1216 03:52:47.048371 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkjc8\" (UniqueName: \"kubernetes.io/projected/ada9a6ed-985f-4b6f-99d7-ef5caf58dd85-kube-api-access-hkjc8\") pod \"calico-kube-controllers-76cd974fc5-ks52b\" (UID: \"ada9a6ed-985f-4b6f-99d7-ef5caf58dd85\") " pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" Dec 16 03:52:47.050640 kubelet[2985]: I1216 03:52:47.048407 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d650a02a-f60f-480c-a3b2-4b144ff8489f-config\") pod \"goldmane-666569f655-ngtdd\" (UID: \"d650a02a-f60f-480c-a3b2-4b144ff8489f\") " pod="calico-system/goldmane-666569f655-ngtdd" Dec 16 03:52:47.050640 kubelet[2985]: I1216 03:52:47.048472 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ada9a6ed-985f-4b6f-99d7-ef5caf58dd85-tigera-ca-bundle\") pod \"calico-kube-controllers-76cd974fc5-ks52b\" (UID: \"ada9a6ed-985f-4b6f-99d7-ef5caf58dd85\") " 
pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" Dec 16 03:52:47.051118 systemd[1]: Created slice kubepods-besteffort-podd650a02a_f60f_480c_a3b2_4b144ff8489f.slice - libcontainer container kubepods-besteffort-podd650a02a_f60f_480c_a3b2_4b144ff8489f.slice. Dec 16 03:52:47.068066 systemd[1]: Created slice kubepods-burstable-pod2a73cfff_498a_4801_a9d9_f3b2fd31770b.slice - libcontainer container kubepods-burstable-pod2a73cfff_498a_4801_a9d9_f3b2fd31770b.slice. Dec 16 03:52:47.081221 systemd[1]: Created slice kubepods-besteffort-podada9a6ed_985f_4b6f_99d7_ef5caf58dd85.slice - libcontainer container kubepods-besteffort-podada9a6ed_985f_4b6f_99d7_ef5caf58dd85.slice. Dec 16 03:52:47.306566 containerd[1640]: time="2025-12-16T03:52:47.306274083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqv5z,Uid:ac3cb9e0-75ec-4604-aa8a-c0773489c9c8,Namespace:kube-system,Attempt:0,}" Dec 16 03:52:47.314702 containerd[1640]: time="2025-12-16T03:52:47.314644327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64754b8886-qt5xx,Uid:050887c2-252c-4b73-aa88-7211b3356790,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:52:47.346526 containerd[1640]: time="2025-12-16T03:52:47.346299805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d6c88684d-r9b2s,Uid:bd783b73-c05d-44ca-8c17-099b3c38bb45,Namespace:calico-system,Attempt:0,}" Dec 16 03:52:47.365232 containerd[1640]: time="2025-12-16T03:52:47.365159849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64754b8886-wksvf,Uid:84a83236-0b05-4630-b32a-21eabf997946,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:52:47.366573 containerd[1640]: time="2025-12-16T03:52:47.366300085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ngtdd,Uid:d650a02a-f60f-480c-a3b2-4b144ff8489f,Namespace:calico-system,Attempt:0,}" Dec 16 03:52:47.382737 containerd[1640]: time="2025-12-16T03:52:47.382664707Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xjdhd,Uid:2a73cfff-498a-4801-a9d9-f3b2fd31770b,Namespace:kube-system,Attempt:0,}" Dec 16 03:52:47.386627 containerd[1640]: time="2025-12-16T03:52:47.386591503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76cd974fc5-ks52b,Uid:ada9a6ed-985f-4b6f-99d7-ef5caf58dd85,Namespace:calico-system,Attempt:0,}" Dec 16 03:52:47.520560 containerd[1640]: time="2025-12-16T03:52:47.519158753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 03:52:47.769299 containerd[1640]: time="2025-12-16T03:52:47.769063839Z" level=error msg="Failed to destroy network for sandbox \"bd4ff9652827a456ce75bd1e9c6902eaf08fc22b8b5e2f47125cfb0f4896a2b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.775741 containerd[1640]: time="2025-12-16T03:52:47.775369392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64754b8886-qt5xx,Uid:050887c2-252c-4b73-aa88-7211b3356790,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd4ff9652827a456ce75bd1e9c6902eaf08fc22b8b5e2f47125cfb0f4896a2b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.777778 kubelet[2985]: E1216 03:52:47.777654 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd4ff9652827a456ce75bd1e9c6902eaf08fc22b8b5e2f47125cfb0f4896a2b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.780133 
kubelet[2985]: E1216 03:52:47.777822 2985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd4ff9652827a456ce75bd1e9c6902eaf08fc22b8b5e2f47125cfb0f4896a2b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64754b8886-qt5xx" Dec 16 03:52:47.780133 kubelet[2985]: E1216 03:52:47.777874 2985 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd4ff9652827a456ce75bd1e9c6902eaf08fc22b8b5e2f47125cfb0f4896a2b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64754b8886-qt5xx" Dec 16 03:52:47.780133 kubelet[2985]: E1216 03:52:47.777963 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64754b8886-qt5xx_calico-apiserver(050887c2-252c-4b73-aa88-7211b3356790)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64754b8886-qt5xx_calico-apiserver(050887c2-252c-4b73-aa88-7211b3356790)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd4ff9652827a456ce75bd1e9c6902eaf08fc22b8b5e2f47125cfb0f4896a2b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64754b8886-qt5xx" podUID="050887c2-252c-4b73-aa88-7211b3356790" Dec 16 03:52:47.820788 containerd[1640]: time="2025-12-16T03:52:47.820386040Z" level=error msg="Failed to destroy network for sandbox 
\"870fee1d895e45e752d6c4b0355b177868249111f68571cef41dbbc5bdc431bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.824559 systemd[1]: run-netns-cni\x2d96a3fac4\x2d965d\x2d97aa\x2d20f3\x2debf6f65c432a.mount: Deactivated successfully. Dec 16 03:52:47.828675 containerd[1640]: time="2025-12-16T03:52:47.828557878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xjdhd,Uid:2a73cfff-498a-4801-a9d9-f3b2fd31770b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"870fee1d895e45e752d6c4b0355b177868249111f68571cef41dbbc5bdc431bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.831623 kubelet[2985]: E1216 03:52:47.829821 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870fee1d895e45e752d6c4b0355b177868249111f68571cef41dbbc5bdc431bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.831623 kubelet[2985]: E1216 03:52:47.829935 2985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870fee1d895e45e752d6c4b0355b177868249111f68571cef41dbbc5bdc431bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xjdhd" Dec 16 03:52:47.831623 kubelet[2985]: E1216 03:52:47.829990 2985 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870fee1d895e45e752d6c4b0355b177868249111f68571cef41dbbc5bdc431bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xjdhd" Dec 16 03:52:47.831972 kubelet[2985]: E1216 03:52:47.830094 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xjdhd_kube-system(2a73cfff-498a-4801-a9d9-f3b2fd31770b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xjdhd_kube-system(2a73cfff-498a-4801-a9d9-f3b2fd31770b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"870fee1d895e45e752d6c4b0355b177868249111f68571cef41dbbc5bdc431bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xjdhd" podUID="2a73cfff-498a-4801-a9d9-f3b2fd31770b" Dec 16 03:52:47.835312 containerd[1640]: time="2025-12-16T03:52:47.835167587Z" level=error msg="Failed to destroy network for sandbox \"d4611955903074fa14b78daff62369bb1bf2663c7079d680a1705318be10044d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.839390 systemd[1]: run-netns-cni\x2d6594bf71\x2d974e\x2df82e\x2dd9dc\x2dfdd54fa97b99.mount: Deactivated successfully. 
Dec 16 03:52:47.844352 containerd[1640]: time="2025-12-16T03:52:47.844295247Z" level=error msg="Failed to destroy network for sandbox \"b341373d02dedd6f5ef8c8336170b496e16af76c8dff8c5667d8120da184148d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.846662 containerd[1640]: time="2025-12-16T03:52:47.846623829Z" level=error msg="Failed to destroy network for sandbox \"bac899ed9e121b863a2f48135552cafafd0d364e0492cf120b3f18b5f24ae250\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.848748 systemd[1]: run-netns-cni\x2d0d36f8de\x2d179a\x2d9d01\x2de27a\x2d527586d50745.mount: Deactivated successfully. Dec 16 03:52:47.852008 containerd[1640]: time="2025-12-16T03:52:47.851163124Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ngtdd,Uid:d650a02a-f60f-480c-a3b2-4b144ff8489f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4611955903074fa14b78daff62369bb1bf2663c7079d680a1705318be10044d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.853752 kubelet[2985]: E1216 03:52:47.853342 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4611955903074fa14b78daff62369bb1bf2663c7079d680a1705318be10044d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.853444 systemd[1]: 
run-netns-cni\x2d9793eb00\x2d4ceb\x2de8a9\x2de5cd\x2d2473cb898514.mount: Deactivated successfully. Dec 16 03:52:47.857736 kubelet[2985]: E1216 03:52:47.855024 2985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4611955903074fa14b78daff62369bb1bf2663c7079d680a1705318be10044d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ngtdd" Dec 16 03:52:47.857736 kubelet[2985]: E1216 03:52:47.855098 2985 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4611955903074fa14b78daff62369bb1bf2663c7079d680a1705318be10044d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ngtdd" Dec 16 03:52:47.857736 kubelet[2985]: E1216 03:52:47.855224 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-ngtdd_calico-system(d650a02a-f60f-480c-a3b2-4b144ff8489f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-ngtdd_calico-system(d650a02a-f60f-480c-a3b2-4b144ff8489f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4611955903074fa14b78daff62369bb1bf2663c7079d680a1705318be10044d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-ngtdd" podUID="d650a02a-f60f-480c-a3b2-4b144ff8489f" Dec 16 03:52:47.857984 containerd[1640]: time="2025-12-16T03:52:47.856732654Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64754b8886-wksvf,Uid:84a83236-0b05-4630-b32a-21eabf997946,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac899ed9e121b863a2f48135552cafafd0d364e0492cf120b3f18b5f24ae250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.862024 kubelet[2985]: E1216 03:52:47.860694 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac899ed9e121b863a2f48135552cafafd0d364e0492cf120b3f18b5f24ae250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.862024 kubelet[2985]: E1216 03:52:47.861823 2985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac899ed9e121b863a2f48135552cafafd0d364e0492cf120b3f18b5f24ae250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" Dec 16 03:52:47.862024 kubelet[2985]: E1216 03:52:47.861862 2985 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac899ed9e121b863a2f48135552cafafd0d364e0492cf120b3f18b5f24ae250\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" Dec 16 03:52:47.862397 kubelet[2985]: E1216 
03:52:47.861953 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64754b8886-wksvf_calico-apiserver(84a83236-0b05-4630-b32a-21eabf997946)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64754b8886-wksvf_calico-apiserver(84a83236-0b05-4630-b32a-21eabf997946)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bac899ed9e121b863a2f48135552cafafd0d364e0492cf120b3f18b5f24ae250\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" podUID="84a83236-0b05-4630-b32a-21eabf997946" Dec 16 03:52:47.868432 containerd[1640]: time="2025-12-16T03:52:47.867754890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d6c88684d-r9b2s,Uid:bd783b73-c05d-44ca-8c17-099b3c38bb45,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b341373d02dedd6f5ef8c8336170b496e16af76c8dff8c5667d8120da184148d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.868600 kubelet[2985]: E1216 03:52:47.868045 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b341373d02dedd6f5ef8c8336170b496e16af76c8dff8c5667d8120da184148d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.868600 kubelet[2985]: E1216 03:52:47.868160 2985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"b341373d02dedd6f5ef8c8336170b496e16af76c8dff8c5667d8120da184148d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d6c88684d-r9b2s" Dec 16 03:52:47.868600 kubelet[2985]: E1216 03:52:47.868203 2985 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b341373d02dedd6f5ef8c8336170b496e16af76c8dff8c5667d8120da184148d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d6c88684d-r9b2s" Dec 16 03:52:47.869244 kubelet[2985]: E1216 03:52:47.868909 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-d6c88684d-r9b2s_calico-system(bd783b73-c05d-44ca-8c17-099b3c38bb45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-d6c88684d-r9b2s_calico-system(bd783b73-c05d-44ca-8c17-099b3c38bb45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b341373d02dedd6f5ef8c8336170b496e16af76c8dff8c5667d8120da184148d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d6c88684d-r9b2s" podUID="bd783b73-c05d-44ca-8c17-099b3c38bb45" Dec 16 03:52:47.870240 containerd[1640]: time="2025-12-16T03:52:47.870137163Z" level=error msg="Failed to destroy network for sandbox \"d98a9ef23c6e92c7a14fb8f7fe20792de9cf9be3a737b013a23f799faa3a941d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 
16 03:52:47.873039 containerd[1640]: time="2025-12-16T03:52:47.872861293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76cd974fc5-ks52b,Uid:ada9a6ed-985f-4b6f-99d7-ef5caf58dd85,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d98a9ef23c6e92c7a14fb8f7fe20792de9cf9be3a737b013a23f799faa3a941d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.874673 containerd[1640]: time="2025-12-16T03:52:47.873850661Z" level=error msg="Failed to destroy network for sandbox \"434fe73d30c8e82111728416289777c9af013604576d9df7f28a129e33497db6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.874804 kubelet[2985]: E1216 03:52:47.874047 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d98a9ef23c6e92c7a14fb8f7fe20792de9cf9be3a737b013a23f799faa3a941d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.874804 kubelet[2985]: E1216 03:52:47.874144 2985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d98a9ef23c6e92c7a14fb8f7fe20792de9cf9be3a737b013a23f799faa3a941d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" Dec 16 03:52:47.874804 kubelet[2985]: E1216 03:52:47.874227 2985 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d98a9ef23c6e92c7a14fb8f7fe20792de9cf9be3a737b013a23f799faa3a941d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" Dec 16 03:52:47.874968 kubelet[2985]: E1216 03:52:47.874453 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76cd974fc5-ks52b_calico-system(ada9a6ed-985f-4b6f-99d7-ef5caf58dd85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76cd974fc5-ks52b_calico-system(ada9a6ed-985f-4b6f-99d7-ef5caf58dd85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d98a9ef23c6e92c7a14fb8f7fe20792de9cf9be3a737b013a23f799faa3a941d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" podUID="ada9a6ed-985f-4b6f-99d7-ef5caf58dd85" Dec 16 03:52:47.878759 containerd[1640]: time="2025-12-16T03:52:47.878485795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqv5z,Uid:ac3cb9e0-75ec-4604-aa8a-c0773489c9c8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"434fe73d30c8e82111728416289777c9af013604576d9df7f28a129e33497db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.879006 kubelet[2985]: E1216 03:52:47.878887 2985 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"434fe73d30c8e82111728416289777c9af013604576d9df7f28a129e33497db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:47.879006 kubelet[2985]: E1216 03:52:47.878936 2985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"434fe73d30c8e82111728416289777c9af013604576d9df7f28a129e33497db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jqv5z" Dec 16 03:52:47.879006 kubelet[2985]: E1216 03:52:47.878962 2985 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"434fe73d30c8e82111728416289777c9af013604576d9df7f28a129e33497db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jqv5z" Dec 16 03:52:47.879293 kubelet[2985]: E1216 03:52:47.879018 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jqv5z_kube-system(ac3cb9e0-75ec-4604-aa8a-c0773489c9c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jqv5z_kube-system(ac3cb9e0-75ec-4604-aa8a-c0773489c9c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"434fe73d30c8e82111728416289777c9af013604576d9df7f28a129e33497db6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jqv5z" podUID="ac3cb9e0-75ec-4604-aa8a-c0773489c9c8" Dec 16 03:52:48.208397 systemd[1]: Created slice kubepods-besteffort-pod68135f19_2992_417d_b99b_f5dddddbe1d3.slice - libcontainer container kubepods-besteffort-pod68135f19_2992_417d_b99b_f5dddddbe1d3.slice. Dec 16 03:52:48.214514 containerd[1640]: time="2025-12-16T03:52:48.214451373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4px76,Uid:68135f19-2992-417d-b99b-f5dddddbe1d3,Namespace:calico-system,Attempt:0,}" Dec 16 03:52:48.309759 containerd[1640]: time="2025-12-16T03:52:48.309644472Z" level=error msg="Failed to destroy network for sandbox \"bf1d72cbcb9b5d8f2f6e98c4c18b9d4df69b137a10f1a97653dffe2829b26ea6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:48.313739 containerd[1640]: time="2025-12-16T03:52:48.313666204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4px76,Uid:68135f19-2992-417d-b99b-f5dddddbe1d3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf1d72cbcb9b5d8f2f6e98c4c18b9d4df69b137a10f1a97653dffe2829b26ea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:48.314489 kubelet[2985]: E1216 03:52:48.314394 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf1d72cbcb9b5d8f2f6e98c4c18b9d4df69b137a10f1a97653dffe2829b26ea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:48.314597 kubelet[2985]: 
E1216 03:52:48.314541 2985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf1d72cbcb9b5d8f2f6e98c4c18b9d4df69b137a10f1a97653dffe2829b26ea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4px76" Dec 16 03:52:48.314669 kubelet[2985]: E1216 03:52:48.314598 2985 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf1d72cbcb9b5d8f2f6e98c4c18b9d4df69b137a10f1a97653dffe2829b26ea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4px76" Dec 16 03:52:48.315827 kubelet[2985]: E1216 03:52:48.315237 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4px76_calico-system(68135f19-2992-417d-b99b-f5dddddbe1d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4px76_calico-system(68135f19-2992-417d-b99b-f5dddddbe1d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf1d72cbcb9b5d8f2f6e98c4c18b9d4df69b137a10f1a97653dffe2829b26ea6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:52:48.792184 systemd[1]: run-netns-cni\x2d6acb8707\x2db99f\x2da4bb\x2d41ce\x2d4900b38a88c5.mount: Deactivated successfully. 
Dec 16 03:52:48.793243 systemd[1]: run-netns-cni\x2d032c66c8\x2d1858\x2d1b05\x2dbd23\x2d7a48f96ca1bf.mount: Deactivated successfully. Dec 16 03:52:53.798353 kubelet[2985]: I1216 03:52:53.798138 2985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 03:52:53.918000 audit[3985]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3985 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:53.925709 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 03:52:53.926015 kernel: audit: type=1325 audit(1765857173.918:576): table=filter:119 family=2 entries=21 op=nft_register_rule pid=3985 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:53.918000 audit[3985]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe1825eeb0 a2=0 a3=7ffe1825ee9c items=0 ppid=3138 pid=3985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:53.918000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:53.936233 kernel: audit: type=1300 audit(1765857173.918:576): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe1825eeb0 a2=0 a3=7ffe1825ee9c items=0 ppid=3138 pid=3985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:53.936312 kernel: audit: type=1327 audit(1765857173.918:576): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:53.927000 audit[3985]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3985 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 
03:52:53.939582 kernel: audit: type=1325 audit(1765857173.927:577): table=nat:120 family=2 entries=19 op=nft_register_chain pid=3985 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:52:53.927000 audit[3985]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe1825eeb0 a2=0 a3=7ffe1825ee9c items=0 ppid=3138 pid=3985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:53.950600 kernel: audit: type=1300 audit(1765857173.927:577): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe1825eeb0 a2=0 a3=7ffe1825ee9c items=0 ppid=3138 pid=3985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:52:53.927000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:53.958751 kernel: audit: type=1327 audit(1765857173.927:577): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:52:58.194136 containerd[1640]: time="2025-12-16T03:52:58.194038358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xjdhd,Uid:2a73cfff-498a-4801-a9d9-f3b2fd31770b,Namespace:kube-system,Attempt:0,}" Dec 16 03:52:58.363758 containerd[1640]: time="2025-12-16T03:52:58.361403241Z" level=error msg="Failed to destroy network for sandbox \"21c76602b39bdd56ba69a0272ef2d61b83c92fc24c66def3247b25d60d47fa6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:58.365980 systemd[1]: 
run-netns-cni\x2d7faa8ca7\x2d6650\x2dfd81\x2dc535\x2d1fe5720eb9c5.mount: Deactivated successfully. Dec 16 03:52:58.369511 containerd[1640]: time="2025-12-16T03:52:58.369457807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xjdhd,Uid:2a73cfff-498a-4801-a9d9-f3b2fd31770b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21c76602b39bdd56ba69a0272ef2d61b83c92fc24c66def3247b25d60d47fa6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:58.370235 kubelet[2985]: E1216 03:52:58.370167 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21c76602b39bdd56ba69a0272ef2d61b83c92fc24c66def3247b25d60d47fa6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:58.371436 kubelet[2985]: E1216 03:52:58.370789 2985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21c76602b39bdd56ba69a0272ef2d61b83c92fc24c66def3247b25d60d47fa6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xjdhd" Dec 16 03:52:58.371436 kubelet[2985]: E1216 03:52:58.370848 2985 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21c76602b39bdd56ba69a0272ef2d61b83c92fc24c66def3247b25d60d47fa6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xjdhd" Dec 16 03:52:58.371755 kubelet[2985]: E1216 03:52:58.371123 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xjdhd_kube-system(2a73cfff-498a-4801-a9d9-f3b2fd31770b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xjdhd_kube-system(2a73cfff-498a-4801-a9d9-f3b2fd31770b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21c76602b39bdd56ba69a0272ef2d61b83c92fc24c66def3247b25d60d47fa6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xjdhd" podUID="2a73cfff-498a-4801-a9d9-f3b2fd31770b" Dec 16 03:52:59.191868 containerd[1640]: time="2025-12-16T03:52:59.191612466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqv5z,Uid:ac3cb9e0-75ec-4604-aa8a-c0773489c9c8,Namespace:kube-system,Attempt:0,}" Dec 16 03:52:59.206039 containerd[1640]: time="2025-12-16T03:52:59.205994999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76cd974fc5-ks52b,Uid:ada9a6ed-985f-4b6f-99d7-ef5caf58dd85,Namespace:calico-system,Attempt:0,}" Dec 16 03:52:59.369038 containerd[1640]: time="2025-12-16T03:52:59.368870960Z" level=error msg="Failed to destroy network for sandbox \"b8534c586ac65896379cad29d439c05e4bac7e74252668014241d2988724f99e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:59.374478 containerd[1640]: time="2025-12-16T03:52:59.374257805Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-jqv5z,Uid:ac3cb9e0-75ec-4604-aa8a-c0773489c9c8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8534c586ac65896379cad29d439c05e4bac7e74252668014241d2988724f99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:59.375341 systemd[1]: run-netns-cni\x2db70b6098\x2d7a62\x2d1805\x2d2a6a\x2d40014b3a0cd8.mount: Deactivated successfully. Dec 16 03:52:59.377003 kubelet[2985]: E1216 03:52:59.375367 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8534c586ac65896379cad29d439c05e4bac7e74252668014241d2988724f99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:59.377003 kubelet[2985]: E1216 03:52:59.375429 2985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8534c586ac65896379cad29d439c05e4bac7e74252668014241d2988724f99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jqv5z" Dec 16 03:52:59.377003 kubelet[2985]: E1216 03:52:59.375459 2985 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8534c586ac65896379cad29d439c05e4bac7e74252668014241d2988724f99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-jqv5z" Dec 16 03:52:59.380573 kubelet[2985]: E1216 03:52:59.375524 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jqv5z_kube-system(ac3cb9e0-75ec-4604-aa8a-c0773489c9c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jqv5z_kube-system(ac3cb9e0-75ec-4604-aa8a-c0773489c9c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8534c586ac65896379cad29d439c05e4bac7e74252668014241d2988724f99e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jqv5z" podUID="ac3cb9e0-75ec-4604-aa8a-c0773489c9c8" Dec 16 03:52:59.396098 containerd[1640]: time="2025-12-16T03:52:59.396050863Z" level=error msg="Failed to destroy network for sandbox \"8e534187fe7063920b5a9b930d129e3ffc18756e28f2a24f50070a2422729cc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:59.400478 containerd[1640]: time="2025-12-16T03:52:59.400419002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76cd974fc5-ks52b,Uid:ada9a6ed-985f-4b6f-99d7-ef5caf58dd85,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e534187fe7063920b5a9b930d129e3ffc18756e28f2a24f50070a2422729cc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:59.401252 kubelet[2985]: E1216 03:52:59.401209 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"8e534187fe7063920b5a9b930d129e3ffc18756e28f2a24f50070a2422729cc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:52:59.402360 systemd[1]: run-netns-cni\x2dbb17d7cc\x2d382b\x2dcec0\x2dfd15\x2dc75c8af704ad.mount: Deactivated successfully. Dec 16 03:52:59.403095 kubelet[2985]: E1216 03:52:59.402442 2985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e534187fe7063920b5a9b930d129e3ffc18756e28f2a24f50070a2422729cc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" Dec 16 03:52:59.403095 kubelet[2985]: E1216 03:52:59.402489 2985 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e534187fe7063920b5a9b930d129e3ffc18756e28f2a24f50070a2422729cc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" Dec 16 03:52:59.403095 kubelet[2985]: E1216 03:52:59.402583 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76cd974fc5-ks52b_calico-system(ada9a6ed-985f-4b6f-99d7-ef5caf58dd85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76cd974fc5-ks52b_calico-system(ada9a6ed-985f-4b6f-99d7-ef5caf58dd85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e534187fe7063920b5a9b930d129e3ffc18756e28f2a24f50070a2422729cc4\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" podUID="ada9a6ed-985f-4b6f-99d7-ef5caf58dd85" Dec 16 03:53:01.243735 containerd[1640]: time="2025-12-16T03:53:01.243040459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64754b8886-wksvf,Uid:84a83236-0b05-4630-b32a-21eabf997946,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:53:01.276463 containerd[1640]: time="2025-12-16T03:53:01.276385517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4px76,Uid:68135f19-2992-417d-b99b-f5dddddbe1d3,Namespace:calico-system,Attempt:0,}" Dec 16 03:53:01.292510 containerd[1640]: time="2025-12-16T03:53:01.292461959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ngtdd,Uid:d650a02a-f60f-480c-a3b2-4b144ff8489f,Namespace:calico-system,Attempt:0,}" Dec 16 03:53:01.298294 containerd[1640]: time="2025-12-16T03:53:01.297985684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64754b8886-qt5xx,Uid:050887c2-252c-4b73-aa88-7211b3356790,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:53:01.539007 containerd[1640]: time="2025-12-16T03:53:01.538890880Z" level=error msg="Failed to destroy network for sandbox \"9c298e1fded6adc75cdd1db94f1fad0737e5cb63271dad80ca3f72a71db502f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:53:01.545831 systemd[1]: run-netns-cni\x2d3d62f78e\x2dab6d\x2d4328\x2d4ab7\x2df69144f129bd.mount: Deactivated successfully. 
Dec 16 03:53:01.550325 containerd[1640]: time="2025-12-16T03:53:01.550231990Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64754b8886-wksvf,Uid:84a83236-0b05-4630-b32a-21eabf997946,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c298e1fded6adc75cdd1db94f1fad0737e5cb63271dad80ca3f72a71db502f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:53:01.550779 containerd[1640]: time="2025-12-16T03:53:01.550451550Z" level=error msg="Failed to destroy network for sandbox \"7c7eab41494a9e4f54c170f08580927e661b342542d29176da310726a62d043d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:53:01.554227 kubelet[2985]: E1216 03:53:01.554175 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c298e1fded6adc75cdd1db94f1fad0737e5cb63271dad80ca3f72a71db502f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:53:01.555451 kubelet[2985]: E1216 03:53:01.555257 2985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c298e1fded6adc75cdd1db94f1fad0737e5cb63271dad80ca3f72a71db502f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" Dec 16 03:53:01.555890 kubelet[2985]: E1216 03:53:01.555483 2985 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c298e1fded6adc75cdd1db94f1fad0737e5cb63271dad80ca3f72a71db502f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" Dec 16 03:53:01.555890 kubelet[2985]: E1216 03:53:01.555598 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64754b8886-wksvf_calico-apiserver(84a83236-0b05-4630-b32a-21eabf997946)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64754b8886-wksvf_calico-apiserver(84a83236-0b05-4630-b32a-21eabf997946)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c298e1fded6adc75cdd1db94f1fad0737e5cb63271dad80ca3f72a71db502f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" podUID="84a83236-0b05-4630-b32a-21eabf997946" Dec 16 03:53:01.556797 containerd[1640]: time="2025-12-16T03:53:01.556438502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64754b8886-qt5xx,Uid:050887c2-252c-4b73-aa88-7211b3356790,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c7eab41494a9e4f54c170f08580927e661b342542d29176da310726a62d043d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:53:01.557281 kubelet[2985]: E1216 03:53:01.557246 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"7c7eab41494a9e4f54c170f08580927e661b342542d29176da310726a62d043d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:53:01.557703 kubelet[2985]: E1216 03:53:01.557523 2985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c7eab41494a9e4f54c170f08580927e661b342542d29176da310726a62d043d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64754b8886-qt5xx" Dec 16 03:53:01.557703 kubelet[2985]: E1216 03:53:01.557582 2985 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c7eab41494a9e4f54c170f08580927e661b342542d29176da310726a62d043d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64754b8886-qt5xx" Dec 16 03:53:01.557703 kubelet[2985]: E1216 03:53:01.557640 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64754b8886-qt5xx_calico-apiserver(050887c2-252c-4b73-aa88-7211b3356790)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64754b8886-qt5xx_calico-apiserver(050887c2-252c-4b73-aa88-7211b3356790)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c7eab41494a9e4f54c170f08580927e661b342542d29176da310726a62d043d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64754b8886-qt5xx" podUID="050887c2-252c-4b73-aa88-7211b3356790" Dec 16 03:53:01.601595 containerd[1640]: time="2025-12-16T03:53:01.601528352Z" level=error msg="Failed to destroy network for sandbox \"af4b15e02e4088e8f0ff5d4205b13ad56d05a0b29c76e34432c5195d30f73e01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:53:01.604785 containerd[1640]: time="2025-12-16T03:53:01.604648930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4px76,Uid:68135f19-2992-417d-b99b-f5dddddbe1d3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af4b15e02e4088e8f0ff5d4205b13ad56d05a0b29c76e34432c5195d30f73e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:53:01.605445 containerd[1640]: time="2025-12-16T03:53:01.601528373Z" level=error msg="Failed to destroy network for sandbox \"b88b33ca8fc448b1b6282d6be0e83a71ec9db6087efac0838e9527b031f55320\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:53:01.608319 kubelet[2985]: E1216 03:53:01.608269 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af4b15e02e4088e8f0ff5d4205b13ad56d05a0b29c76e34432c5195d30f73e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:53:01.608462 
kubelet[2985]: E1216 03:53:01.608362 2985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af4b15e02e4088e8f0ff5d4205b13ad56d05a0b29c76e34432c5195d30f73e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4px76" Dec 16 03:53:01.608462 kubelet[2985]: E1216 03:53:01.608394 2985 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af4b15e02e4088e8f0ff5d4205b13ad56d05a0b29c76e34432c5195d30f73e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4px76" Dec 16 03:53:01.608888 kubelet[2985]: E1216 03:53:01.608565 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4px76_calico-system(68135f19-2992-417d-b99b-f5dddddbe1d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4px76_calico-system(68135f19-2992-417d-b99b-f5dddddbe1d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af4b15e02e4088e8f0ff5d4205b13ad56d05a0b29c76e34432c5195d30f73e01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:53:01.617916 containerd[1640]: time="2025-12-16T03:53:01.617379215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ngtdd,Uid:d650a02a-f60f-480c-a3b2-4b144ff8489f,Namespace:calico-system,Attempt:0,} 
failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b88b33ca8fc448b1b6282d6be0e83a71ec9db6087efac0838e9527b031f55320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:53:01.618293 kubelet[2985]: E1216 03:53:01.617615 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b88b33ca8fc448b1b6282d6be0e83a71ec9db6087efac0838e9527b031f55320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:53:01.618293 kubelet[2985]: E1216 03:53:01.617662 2985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b88b33ca8fc448b1b6282d6be0e83a71ec9db6087efac0838e9527b031f55320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ngtdd" Dec 16 03:53:01.618293 kubelet[2985]: E1216 03:53:01.617711 2985 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b88b33ca8fc448b1b6282d6be0e83a71ec9db6087efac0838e9527b031f55320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ngtdd" Dec 16 03:53:01.618493 kubelet[2985]: E1216 03:53:01.618173 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-666569f655-ngtdd_calico-system(d650a02a-f60f-480c-a3b2-4b144ff8489f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-ngtdd_calico-system(d650a02a-f60f-480c-a3b2-4b144ff8489f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b88b33ca8fc448b1b6282d6be0e83a71ec9db6087efac0838e9527b031f55320\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-ngtdd" podUID="d650a02a-f60f-480c-a3b2-4b144ff8489f" Dec 16 03:53:01.731160 containerd[1640]: time="2025-12-16T03:53:01.731071732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:53:01.745531 containerd[1640]: time="2025-12-16T03:53:01.745468588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 03:53:01.778758 containerd[1640]: time="2025-12-16T03:53:01.777812967Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:53:01.782007 containerd[1640]: time="2025-12-16T03:53:01.781965139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:53:01.782997 containerd[1640]: time="2025-12-16T03:53:01.782956042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 14.263664905s" Dec 16 03:53:01.783145 containerd[1640]: time="2025-12-16T03:53:01.783116887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 03:53:01.849791 containerd[1640]: time="2025-12-16T03:53:01.849112728Z" level=info msg="CreateContainer within sandbox \"816a5814d707b62b3cba02e420f8e025ac235ef93c23f02b7e2bd4c6de376887\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 03:53:01.944159 containerd[1640]: time="2025-12-16T03:53:01.944077803Z" level=info msg="Container 49def7ff262ee352753195084d86760a16ab89e6725f7a5f9421345aecf8268f: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:53:01.999737 containerd[1640]: time="2025-12-16T03:53:01.999675956Z" level=info msg="CreateContainer within sandbox \"816a5814d707b62b3cba02e420f8e025ac235ef93c23f02b7e2bd4c6de376887\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"49def7ff262ee352753195084d86760a16ab89e6725f7a5f9421345aecf8268f\"" Dec 16 03:53:02.001844 containerd[1640]: time="2025-12-16T03:53:02.001477170Z" level=info msg="StartContainer for \"49def7ff262ee352753195084d86760a16ab89e6725f7a5f9421345aecf8268f\"" Dec 16 03:53:02.012962 containerd[1640]: time="2025-12-16T03:53:02.012643938Z" level=info msg="connecting to shim 49def7ff262ee352753195084d86760a16ab89e6725f7a5f9421345aecf8268f" address="unix:///run/containerd/s/f62444bc3037e9ebd1bfdcd82e6fd65bceb964606774d7b097bb7f9680d87fa2" protocol=ttrpc version=3 Dec 16 03:53:02.138411 systemd[1]: Started cri-containerd-49def7ff262ee352753195084d86760a16ab89e6725f7a5f9421345aecf8268f.scope - libcontainer container 49def7ff262ee352753195084d86760a16ab89e6725f7a5f9421345aecf8268f. 
Dec 16 03:53:02.192113 containerd[1640]: time="2025-12-16T03:53:02.192038262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d6c88684d-r9b2s,Uid:bd783b73-c05d-44ca-8c17-099b3c38bb45,Namespace:calico-system,Attempt:0,}" Dec 16 03:53:02.234000 audit: BPF prog-id=176 op=LOAD Dec 16 03:53:02.234000 audit[4174]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3513 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:02.243028 kernel: audit: type=1334 audit(1765857182.234:578): prog-id=176 op=LOAD Dec 16 03:53:02.243131 kernel: audit: type=1300 audit(1765857182.234:578): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3513 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:02.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439646566376666323632656533353237353331393530383464383637 Dec 16 03:53:02.248067 kernel: audit: type=1327 audit(1765857182.234:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439646566376666323632656533353237353331393530383464383637 Dec 16 03:53:02.234000 audit: BPF prog-id=177 op=LOAD Dec 16 03:53:02.252206 kernel: audit: type=1334 audit(1765857182.234:579): prog-id=177 op=LOAD Dec 16 03:53:02.234000 audit[4174]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 
ppid=3513 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:02.255947 kernel: audit: type=1300 audit(1765857182.234:579): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3513 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:02.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439646566376666323632656533353237353331393530383464383637 Dec 16 03:53:02.268094 systemd[1]: run-netns-cni\x2d35aace1c\x2d03c0\x2d5a46\x2d06cd\x2d84f1bdbf89e6.mount: Deactivated successfully. Dec 16 03:53:02.268457 systemd[1]: run-netns-cni\x2dffadf199\x2d5680\x2d09b8\x2dc601\x2d80c4d93d994c.mount: Deactivated successfully. Dec 16 03:53:02.268691 systemd[1]: run-netns-cni\x2d686d570b\x2d41ae\x2ddfd8\x2d4696\x2d67e446a80f41.mount: Deactivated successfully. Dec 16 03:53:02.269055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4147853197.mount: Deactivated successfully. 
Dec 16 03:53:02.271944 kernel: audit: type=1327 audit(1765857182.234:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439646566376666323632656533353237353331393530383464383637 Dec 16 03:53:02.280750 kernel: audit: type=1334 audit(1765857182.234:580): prog-id=177 op=UNLOAD Dec 16 03:53:02.234000 audit: BPF prog-id=177 op=UNLOAD Dec 16 03:53:02.234000 audit[4174]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3513 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:02.287776 kernel: audit: type=1300 audit(1765857182.234:580): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3513 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:02.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439646566376666323632656533353237353331393530383464383637 Dec 16 03:53:02.295758 kernel: audit: type=1327 audit(1765857182.234:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439646566376666323632656533353237353331393530383464383637 Dec 16 03:53:02.234000 audit: BPF prog-id=176 op=UNLOAD Dec 16 03:53:02.298839 kernel: audit: type=1334 audit(1765857182.234:581): prog-id=176 op=UNLOAD Dec 16 03:53:02.234000 audit[4174]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3513 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:02.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439646566376666323632656533353237353331393530383464383637 Dec 16 03:53:02.234000 audit: BPF prog-id=178 op=LOAD Dec 16 03:53:02.234000 audit[4174]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3513 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:02.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439646566376666323632656533353237353331393530383464383637 Dec 16 03:53:02.350323 containerd[1640]: time="2025-12-16T03:53:02.350228744Z" level=info msg="StartContainer for \"49def7ff262ee352753195084d86760a16ab89e6725f7a5f9421345aecf8268f\" returns successfully" Dec 16 03:53:02.370513 containerd[1640]: time="2025-12-16T03:53:02.370411820Z" level=error msg="Failed to destroy network for sandbox \"aa1ea7c765a1090435d2e4f1bbce768b8709fa2c7968c14977e2d7503a0dbec5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:53:02.372922 containerd[1640]: time="2025-12-16T03:53:02.372762987Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-d6c88684d-r9b2s,Uid:bd783b73-c05d-44ca-8c17-099b3c38bb45,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa1ea7c765a1090435d2e4f1bbce768b8709fa2c7968c14977e2d7503a0dbec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:53:02.373174 kubelet[2985]: E1216 03:53:02.373106 2985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa1ea7c765a1090435d2e4f1bbce768b8709fa2c7968c14977e2d7503a0dbec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:53:02.373393 kubelet[2985]: E1216 03:53:02.373227 2985 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa1ea7c765a1090435d2e4f1bbce768b8709fa2c7968c14977e2d7503a0dbec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d6c88684d-r9b2s" Dec 16 03:53:02.373393 kubelet[2985]: E1216 03:53:02.373285 2985 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa1ea7c765a1090435d2e4f1bbce768b8709fa2c7968c14977e2d7503a0dbec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d6c88684d-r9b2s" Dec 16 03:53:02.376850 kubelet[2985]: E1216 03:53:02.373868 2985 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-d6c88684d-r9b2s_calico-system(bd783b73-c05d-44ca-8c17-099b3c38bb45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-d6c88684d-r9b2s_calico-system(bd783b73-c05d-44ca-8c17-099b3c38bb45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa1ea7c765a1090435d2e4f1bbce768b8709fa2c7968c14977e2d7503a0dbec5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d6c88684d-r9b2s" podUID="bd783b73-c05d-44ca-8c17-099b3c38bb45" Dec 16 03:53:02.378456 systemd[1]: run-netns-cni\x2decc922ed\x2d225f\x2d67bc\x2db06d\x2d23acff98bdfb.mount: Deactivated successfully. Dec 16 03:53:02.803916 kubelet[2985]: I1216 03:53:02.801828 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sdkdt" podStartSLOduration=1.997426675 podStartE2EDuration="32.788661428s" podCreationTimestamp="2025-12-16 03:52:30 +0000 UTC" firstStartedPulling="2025-12-16 03:52:30.994952606 +0000 UTC m=+27.009579615" lastFinishedPulling="2025-12-16 03:53:01.786187343 +0000 UTC m=+57.800814368" observedRunningTime="2025-12-16 03:53:02.776072625 +0000 UTC m=+58.790699662" watchObservedRunningTime="2025-12-16 03:53:02.788661428 +0000 UTC m=+58.803288445" Dec 16 03:53:03.083855 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 03:53:03.084025 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 03:53:03.510746 kubelet[2985]: I1216 03:53:03.510122 2985 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd783b73-c05d-44ca-8c17-099b3c38bb45-whisker-backend-key-pair\") pod \"bd783b73-c05d-44ca-8c17-099b3c38bb45\" (UID: \"bd783b73-c05d-44ca-8c17-099b3c38bb45\") " Dec 16 03:53:03.510746 kubelet[2985]: I1216 03:53:03.510193 2985 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd783b73-c05d-44ca-8c17-099b3c38bb45-whisker-ca-bundle\") pod \"bd783b73-c05d-44ca-8c17-099b3c38bb45\" (UID: \"bd783b73-c05d-44ca-8c17-099b3c38bb45\") " Dec 16 03:53:03.510746 kubelet[2985]: I1216 03:53:03.510238 2985 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtpzd\" (UniqueName: \"kubernetes.io/projected/bd783b73-c05d-44ca-8c17-099b3c38bb45-kube-api-access-rtpzd\") pod \"bd783b73-c05d-44ca-8c17-099b3c38bb45\" (UID: \"bd783b73-c05d-44ca-8c17-099b3c38bb45\") " Dec 16 03:53:03.536634 kubelet[2985]: I1216 03:53:03.536565 2985 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd783b73-c05d-44ca-8c17-099b3c38bb45-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bd783b73-c05d-44ca-8c17-099b3c38bb45" (UID: "bd783b73-c05d-44ca-8c17-099b3c38bb45"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 03:53:03.545880 systemd[1]: var-lib-kubelet-pods-bd783b73\x2dc05d\x2d44ca\x2d8c17\x2d099b3c38bb45-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drtpzd.mount: Deactivated successfully. Dec 16 03:53:03.546032 systemd[1]: var-lib-kubelet-pods-bd783b73\x2dc05d\x2d44ca\x2d8c17\x2d099b3c38bb45-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 03:53:03.549429 kubelet[2985]: I1216 03:53:03.549315 2985 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd783b73-c05d-44ca-8c17-099b3c38bb45-kube-api-access-rtpzd" (OuterVolumeSpecName: "kube-api-access-rtpzd") pod "bd783b73-c05d-44ca-8c17-099b3c38bb45" (UID: "bd783b73-c05d-44ca-8c17-099b3c38bb45"). InnerVolumeSpecName "kube-api-access-rtpzd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 03:53:03.551966 kubelet[2985]: I1216 03:53:03.551837 2985 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd783b73-c05d-44ca-8c17-099b3c38bb45-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "bd783b73-c05d-44ca-8c17-099b3c38bb45" (UID: "bd783b73-c05d-44ca-8c17-099b3c38bb45"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 03:53:03.613420 kubelet[2985]: I1216 03:53:03.613188 2985 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd783b73-c05d-44ca-8c17-099b3c38bb45-whisker-ca-bundle\") on node \"srv-n64tt.gb1.brightbox.com\" DevicePath \"\"" Dec 16 03:53:03.613420 kubelet[2985]: I1216 03:53:03.613417 2985 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rtpzd\" (UniqueName: \"kubernetes.io/projected/bd783b73-c05d-44ca-8c17-099b3c38bb45-kube-api-access-rtpzd\") on node \"srv-n64tt.gb1.brightbox.com\" DevicePath \"\"" Dec 16 03:53:03.613676 kubelet[2985]: I1216 03:53:03.613443 2985 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd783b73-c05d-44ca-8c17-099b3c38bb45-whisker-backend-key-pair\") on node \"srv-n64tt.gb1.brightbox.com\" DevicePath \"\"" Dec 16 03:53:03.653514 systemd[1]: Removed slice kubepods-besteffort-podbd783b73_c05d_44ca_8c17_099b3c38bb45.slice - libcontainer container 
kubepods-besteffort-podbd783b73_c05d_44ca_8c17_099b3c38bb45.slice. Dec 16 03:53:03.835637 systemd[1]: Created slice kubepods-besteffort-podcc879cee_dcea_4d9b_a071_304996197deb.slice - libcontainer container kubepods-besteffort-podcc879cee_dcea_4d9b_a071_304996197deb.slice. Dec 16 03:53:03.926390 kubelet[2985]: I1216 03:53:03.926261 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlhfb\" (UniqueName: \"kubernetes.io/projected/cc879cee-dcea-4d9b-a071-304996197deb-kube-api-access-tlhfb\") pod \"whisker-79b9867c46-mhs6d\" (UID: \"cc879cee-dcea-4d9b-a071-304996197deb\") " pod="calico-system/whisker-79b9867c46-mhs6d" Dec 16 03:53:03.927260 kubelet[2985]: I1216 03:53:03.927051 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc879cee-dcea-4d9b-a071-304996197deb-whisker-backend-key-pair\") pod \"whisker-79b9867c46-mhs6d\" (UID: \"cc879cee-dcea-4d9b-a071-304996197deb\") " pod="calico-system/whisker-79b9867c46-mhs6d" Dec 16 03:53:03.927260 kubelet[2985]: I1216 03:53:03.927174 2985 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc879cee-dcea-4d9b-a071-304996197deb-whisker-ca-bundle\") pod \"whisker-79b9867c46-mhs6d\" (UID: \"cc879cee-dcea-4d9b-a071-304996197deb\") " pod="calico-system/whisker-79b9867c46-mhs6d" Dec 16 03:53:04.144783 containerd[1640]: time="2025-12-16T03:53:04.144612958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79b9867c46-mhs6d,Uid:cc879cee-dcea-4d9b-a071-304996197deb,Namespace:calico-system,Attempt:0,}" Dec 16 03:53:04.203394 kubelet[2985]: I1216 03:53:04.203306 2985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd783b73-c05d-44ca-8c17-099b3c38bb45" 
path="/var/lib/kubelet/pods/bd783b73-c05d-44ca-8c17-099b3c38bb45/volumes" Dec 16 03:53:04.567703 systemd-networkd[1555]: cali70b3657b34c: Link UP Dec 16 03:53:04.569498 systemd-networkd[1555]: cali70b3657b34c: Gained carrier Dec 16 03:53:04.594734 containerd[1640]: 2025-12-16 03:53:04.196 [INFO][4319] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:53:04.594734 containerd[1640]: 2025-12-16 03:53:04.240 [INFO][4319] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--n64tt.gb1.brightbox.com-k8s-whisker--79b9867c46--mhs6d-eth0 whisker-79b9867c46- calico-system cc879cee-dcea-4d9b-a071-304996197deb 934 0 2025-12-16 03:53:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79b9867c46 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-n64tt.gb1.brightbox.com whisker-79b9867c46-mhs6d eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali70b3657b34c [] [] }} ContainerID="870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" Namespace="calico-system" Pod="whisker-79b9867c46-mhs6d" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-whisker--79b9867c46--mhs6d-" Dec 16 03:53:04.594734 containerd[1640]: 2025-12-16 03:53:04.240 [INFO][4319] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" Namespace="calico-system" Pod="whisker-79b9867c46-mhs6d" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-whisker--79b9867c46--mhs6d-eth0" Dec 16 03:53:04.594734 containerd[1640]: 2025-12-16 03:53:04.486 [INFO][4333] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" HandleID="k8s-pod-network.870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" 
Workload="srv--n64tt.gb1.brightbox.com-k8s-whisker--79b9867c46--mhs6d-eth0" Dec 16 03:53:04.597299 containerd[1640]: 2025-12-16 03:53:04.488 [INFO][4333] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" HandleID="k8s-pod-network.870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" Workload="srv--n64tt.gb1.brightbox.com-k8s-whisker--79b9867c46--mhs6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000363760), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-n64tt.gb1.brightbox.com", "pod":"whisker-79b9867c46-mhs6d", "timestamp":"2025-12-16 03:53:04.486270366 +0000 UTC"}, Hostname:"srv-n64tt.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:53:04.597299 containerd[1640]: 2025-12-16 03:53:04.489 [INFO][4333] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:53:04.597299 containerd[1640]: 2025-12-16 03:53:04.489 [INFO][4333] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:53:04.597299 containerd[1640]: 2025-12-16 03:53:04.490 [INFO][4333] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-n64tt.gb1.brightbox.com' Dec 16 03:53:04.597299 containerd[1640]: 2025-12-16 03:53:04.509 [INFO][4333] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:04.597299 containerd[1640]: 2025-12-16 03:53:04.520 [INFO][4333] ipam/ipam.go 394: Looking up existing affinities for host host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:04.597299 containerd[1640]: 2025-12-16 03:53:04.526 [INFO][4333] ipam/ipam.go 511: Trying affinity for 192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:04.597299 containerd[1640]: 2025-12-16 03:53:04.528 [INFO][4333] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:04.597299 containerd[1640]: 2025-12-16 03:53:04.531 [INFO][4333] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:04.601033 containerd[1640]: 2025-12-16 03:53:04.531 [INFO][4333] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:04.601033 containerd[1640]: 2025-12-16 03:53:04.533 [INFO][4333] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a Dec 16 03:53:04.601033 containerd[1640]: 2025-12-16 03:53:04.539 [INFO][4333] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:04.601033 containerd[1640]: 2025-12-16 03:53:04.545 [INFO][4333] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.65/26] block=192.168.124.64/26 handle="k8s-pod-network.870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:04.601033 containerd[1640]: 2025-12-16 03:53:04.546 [INFO][4333] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.65/26] handle="k8s-pod-network.870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:04.601033 containerd[1640]: 2025-12-16 03:53:04.546 [INFO][4333] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:53:04.601033 containerd[1640]: 2025-12-16 03:53:04.546 [INFO][4333] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.65/26] IPv6=[] ContainerID="870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" HandleID="k8s-pod-network.870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" Workload="srv--n64tt.gb1.brightbox.com-k8s-whisker--79b9867c46--mhs6d-eth0" Dec 16 03:53:04.601482 containerd[1640]: 2025-12-16 03:53:04.550 [INFO][4319] cni-plugin/k8s.go 418: Populated endpoint ContainerID="870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" Namespace="calico-system" Pod="whisker-79b9867c46-mhs6d" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-whisker--79b9867c46--mhs6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n64tt.gb1.brightbox.com-k8s-whisker--79b9867c46--mhs6d-eth0", GenerateName:"whisker-79b9867c46-", Namespace:"calico-system", SelfLink:"", UID:"cc879cee-dcea-4d9b-a071-304996197deb", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", 
"pod-template-hash":"79b9867c46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n64tt.gb1.brightbox.com", ContainerID:"", Pod:"whisker-79b9867c46-mhs6d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali70b3657b34c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:53:04.601482 containerd[1640]: 2025-12-16 03:53:04.550 [INFO][4319] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.65/32] ContainerID="870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" Namespace="calico-system" Pod="whisker-79b9867c46-mhs6d" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-whisker--79b9867c46--mhs6d-eth0" Dec 16 03:53:04.601675 containerd[1640]: 2025-12-16 03:53:04.550 [INFO][4319] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70b3657b34c ContainerID="870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" Namespace="calico-system" Pod="whisker-79b9867c46-mhs6d" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-whisker--79b9867c46--mhs6d-eth0" Dec 16 03:53:04.601675 containerd[1640]: 2025-12-16 03:53:04.572 [INFO][4319] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" Namespace="calico-system" Pod="whisker-79b9867c46-mhs6d" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-whisker--79b9867c46--mhs6d-eth0" Dec 16 03:53:04.603203 containerd[1640]: 2025-12-16 
03:53:04.573 [INFO][4319] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" Namespace="calico-system" Pod="whisker-79b9867c46-mhs6d" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-whisker--79b9867c46--mhs6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n64tt.gb1.brightbox.com-k8s-whisker--79b9867c46--mhs6d-eth0", GenerateName:"whisker-79b9867c46-", Namespace:"calico-system", SelfLink:"", UID:"cc879cee-dcea-4d9b-a071-304996197deb", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79b9867c46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n64tt.gb1.brightbox.com", ContainerID:"870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a", Pod:"whisker-79b9867c46-mhs6d", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali70b3657b34c", MAC:"66:69:91:97:36:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:53:04.603385 containerd[1640]: 2025-12-16 03:53:04.590 [INFO][4319] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" Namespace="calico-system" Pod="whisker-79b9867c46-mhs6d" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-whisker--79b9867c46--mhs6d-eth0" Dec 16 03:53:04.794355 containerd[1640]: time="2025-12-16T03:53:04.792512794Z" level=info msg="connecting to shim 870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a" address="unix:///run/containerd/s/0cbb184c8e309465ecedb2f477320053f98eabcb8d83eee06b5804439c18f324" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:53:04.896046 systemd[1]: Started cri-containerd-870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a.scope - libcontainer container 870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a. Dec 16 03:53:04.924000 audit: BPF prog-id=179 op=LOAD Dec 16 03:53:04.925000 audit: BPF prog-id=180 op=LOAD Dec 16 03:53:04.925000 audit[4394]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4381 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:04.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837306538353037383466383337633062613965383536353132646534 Dec 16 03:53:04.925000 audit: BPF prog-id=180 op=UNLOAD Dec 16 03:53:04.925000 audit[4394]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4381 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:04.925000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837306538353037383466383337633062613965383536353132646534 Dec 16 03:53:04.925000 audit: BPF prog-id=181 op=LOAD Dec 16 03:53:04.925000 audit[4394]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4381 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:04.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837306538353037383466383337633062613965383536353132646534 Dec 16 03:53:04.925000 audit: BPF prog-id=182 op=LOAD Dec 16 03:53:04.925000 audit[4394]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4381 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:04.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837306538353037383466383337633062613965383536353132646534 Dec 16 03:53:04.925000 audit: BPF prog-id=182 op=UNLOAD Dec 16 03:53:04.925000 audit[4394]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4381 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:53:04.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837306538353037383466383337633062613965383536353132646534 Dec 16 03:53:04.925000 audit: BPF prog-id=181 op=UNLOAD Dec 16 03:53:04.925000 audit[4394]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4381 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:04.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837306538353037383466383337633062613965383536353132646534 Dec 16 03:53:04.925000 audit: BPF prog-id=183 op=LOAD Dec 16 03:53:04.925000 audit[4394]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4381 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:04.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837306538353037383466383337633062613965383536353132646534 Dec 16 03:53:04.994764 containerd[1640]: time="2025-12-16T03:53:04.994201804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79b9867c46-mhs6d,Uid:cc879cee-dcea-4d9b-a071-304996197deb,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"870e850784f837c0ba9e856512de4de2cf9754b9868223590171403a71e32b4a\"" Dec 16 03:53:05.012173 containerd[1640]: time="2025-12-16T03:53:05.012117814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:53:05.339571 containerd[1640]: time="2025-12-16T03:53:05.339455418Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:05.352781 containerd[1640]: time="2025-12-16T03:53:05.352651328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:53:05.353915 containerd[1640]: time="2025-12-16T03:53:05.353142316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:05.370957 kubelet[2985]: E1216 03:53:05.365182 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:53:05.374927 kubelet[2985]: E1216 03:53:05.374869 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:53:05.380811 kubelet[2985]: E1216 03:53:05.380731 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a60c07445d084b0f859deb3c2d6c59e4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tlhfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b9867c46-mhs6d_calico-system(cc879cee-dcea-4d9b-a071-304996197deb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:05.386424 containerd[1640]: time="2025-12-16T03:53:05.385440762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:53:05.595570 systemd-networkd[1555]: cali70b3657b34c: 
Gained IPv6LL Dec 16 03:53:05.697992 containerd[1640]: time="2025-12-16T03:53:05.697880467Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:05.700627 containerd[1640]: time="2025-12-16T03:53:05.700571695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:53:05.700807 containerd[1640]: time="2025-12-16T03:53:05.700687176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:05.701141 kubelet[2985]: E1216 03:53:05.701064 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:53:05.701274 kubelet[2985]: E1216 03:53:05.701184 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:53:05.701834 kubelet[2985]: E1216 03:53:05.701414 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlhfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b9867c46-mhs6d_calico-system(cc879cee-dcea-4d9b-a071-304996197deb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:05.702771 kubelet[2985]: E1216 03:53:05.702674 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9867c46-mhs6d" podUID="cc879cee-dcea-4d9b-a071-304996197deb" Dec 16 03:53:05.739000 audit: BPF prog-id=184 op=LOAD Dec 16 03:53:05.739000 audit[4538]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe20b65ae0 a2=98 a3=1fffffffffffffff items=0 ppid=4432 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:05.739000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:53:05.739000 audit: BPF prog-id=184 op=UNLOAD Dec 16 03:53:05.739000 audit[4538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe20b65ab0 a3=0 items=0 ppid=4432 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 03:53:05.739000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:53:05.740000 audit: BPF prog-id=185 op=LOAD Dec 16 03:53:05.740000 audit[4538]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe20b659c0 a2=94 a3=3 items=0 ppid=4432 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:05.740000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:53:05.740000 audit: BPF prog-id=185 op=UNLOAD Dec 16 03:53:05.740000 audit[4538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe20b659c0 a2=94 a3=3 items=0 ppid=4432 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:05.740000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:53:05.740000 audit: BPF prog-id=186 op=LOAD Dec 16 03:53:05.740000 audit[4538]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe20b65a00 a2=94 a3=7ffe20b65be0 items=0 ppid=4432 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:05.740000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:53:05.740000 audit: BPF prog-id=186 op=UNLOAD Dec 16 03:53:05.740000 audit[4538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe20b65a00 a2=94 a3=7ffe20b65be0 items=0 ppid=4432 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:05.740000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:53:05.744000 audit: BPF prog-id=187 op=LOAD Dec 16 03:53:05.744000 audit[4539]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe045bb360 a2=98 a3=3 items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:05.744000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:05.744000 audit: BPF prog-id=187 op=UNLOAD Dec 16 03:53:05.744000 audit[4539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe045bb330 a3=0 items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:05.744000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:05.745000 audit: BPF prog-id=188 op=LOAD Dec 16 03:53:05.745000 audit[4539]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe045bb150 a2=94 a3=54428f items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:05.745000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:05.746000 audit: BPF prog-id=188 op=UNLOAD Dec 16 03:53:05.746000 audit[4539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe045bb150 a2=94 a3=54428f items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:05.746000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:05.746000 audit: BPF prog-id=189 op=LOAD Dec 16 03:53:05.746000 audit[4539]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe045bb180 a2=94 a3=2 items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:05.746000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:05.746000 audit: BPF prog-id=189 op=UNLOAD Dec 16 03:53:05.746000 audit[4539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe045bb180 a2=0 a3=2 items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:05.746000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 
03:53:06.021000 audit: BPF prog-id=190 op=LOAD Dec 16 03:53:06.021000 audit[4539]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe045bb040 a2=94 a3=1 items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.021000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:06.021000 audit: BPF prog-id=190 op=UNLOAD Dec 16 03:53:06.021000 audit[4539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe045bb040 a2=94 a3=1 items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.021000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:06.041000 audit: BPF prog-id=191 op=LOAD Dec 16 03:53:06.041000 audit[4539]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe045bb030 a2=94 a3=4 items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.041000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:06.041000 audit: BPF prog-id=191 op=UNLOAD Dec 16 03:53:06.041000 audit[4539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe045bb030 a2=0 a3=4 items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.041000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:06.041000 audit: BPF prog-id=192 op=LOAD Dec 16 03:53:06.041000 
audit[4539]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe045bae90 a2=94 a3=5 items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.041000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:06.041000 audit: BPF prog-id=192 op=UNLOAD Dec 16 03:53:06.041000 audit[4539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe045bae90 a2=0 a3=5 items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.041000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:06.042000 audit: BPF prog-id=193 op=LOAD Dec 16 03:53:06.042000 audit[4539]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe045bb0b0 a2=94 a3=6 items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.042000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:06.043000 audit: BPF prog-id=193 op=UNLOAD Dec 16 03:53:06.043000 audit[4539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe045bb0b0 a2=0 a3=6 items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.043000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:06.044000 audit: BPF prog-id=194 op=LOAD Dec 16 03:53:06.044000 audit[4539]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 
a1=7ffe045ba860 a2=94 a3=88 items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.044000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:06.044000 audit: BPF prog-id=195 op=LOAD Dec 16 03:53:06.044000 audit[4539]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe045ba6e0 a2=94 a3=2 items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.044000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:06.045000 audit: BPF prog-id=195 op=UNLOAD Dec 16 03:53:06.045000 audit[4539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe045ba710 a2=0 a3=7ffe045ba810 items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.045000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:06.046000 audit: BPF prog-id=194 op=UNLOAD Dec 16 03:53:06.046000 audit[4539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=292bed10 a2=0 a3=99c5bba408537768 items=0 ppid=4432 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.046000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:53:06.065000 audit: BPF prog-id=196 op=LOAD Dec 16 03:53:06.065000 audit[4542]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffea672c910 a2=98 a3=1999999999999999 items=0 
ppid=4432 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.065000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:53:06.065000 audit: BPF prog-id=196 op=UNLOAD Dec 16 03:53:06.065000 audit[4542]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffea672c8e0 a3=0 items=0 ppid=4432 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.065000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:53:06.065000 audit: BPF prog-id=197 op=LOAD Dec 16 03:53:06.065000 audit[4542]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffea672c7f0 a2=94 a3=ffff items=0 ppid=4432 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.065000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:53:06.065000 audit: BPF prog-id=197 op=UNLOAD Dec 16 03:53:06.065000 audit[4542]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffea672c7f0 a2=94 a3=ffff items=0 ppid=4432 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.065000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:53:06.065000 audit: BPF prog-id=198 op=LOAD Dec 16 03:53:06.065000 audit[4542]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffea672c830 a2=94 a3=7ffea672ca10 items=0 ppid=4432 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.065000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:53:06.065000 audit: BPF prog-id=198 op=UNLOAD Dec 16 03:53:06.065000 audit[4542]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffea672c830 a2=94 a3=7ffea672ca10 items=0 ppid=4432 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.065000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F 
Dec 16 03:53:06.161783 systemd-networkd[1555]: vxlan.calico: Link UP Dec 16 03:53:06.161798 systemd-networkd[1555]: vxlan.calico: Gained carrier Dec 16 03:53:06.204000 audit: BPF prog-id=199 op=LOAD Dec 16 03:53:06.204000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdbd2ace30 a2=98 a3=20 items=0 ppid=4432 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.204000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:53:06.206000 audit: BPF prog-id=199 op=UNLOAD Dec 16 03:53:06.206000 audit[4569]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdbd2ace00 a3=0 items=0 ppid=4432 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.206000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:53:06.206000 audit: BPF prog-id=200 op=LOAD Dec 16 03:53:06.206000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdbd2acc40 a2=94 a3=54428f items=0 ppid=4432 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.206000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:53:06.207000 audit: BPF prog-id=200 op=UNLOAD Dec 16 03:53:06.207000 audit[4569]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdbd2acc40 a2=94 a3=54428f items=0 ppid=4432 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.207000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:53:06.207000 audit: BPF prog-id=201 op=LOAD Dec 16 03:53:06.207000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdbd2acc70 a2=94 a3=2 items=0 ppid=4432 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.207000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:53:06.207000 audit: BPF prog-id=201 op=UNLOAD Dec 16 03:53:06.207000 audit[4569]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdbd2acc70 a2=0 a3=2 items=0 ppid=4432 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.207000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:53:06.207000 audit: BPF prog-id=202 op=LOAD Dec 16 03:53:06.207000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdbd2aca20 a2=94 a3=4 items=0 ppid=4432 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.207000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:53:06.207000 audit: BPF prog-id=202 op=UNLOAD Dec 16 03:53:06.207000 audit[4569]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdbd2aca20 a2=94 a3=4 items=0 ppid=4432 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.207000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:53:06.207000 audit: BPF prog-id=203 op=LOAD Dec 16 03:53:06.207000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdbd2acb20 a2=94 a3=7ffdbd2acca0 items=0 ppid=4432 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.207000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:53:06.207000 audit: BPF prog-id=203 op=UNLOAD Dec 16 03:53:06.207000 audit[4569]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdbd2acb20 a2=0 a3=7ffdbd2acca0 items=0 ppid=4432 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.207000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:53:06.210000 audit: BPF prog-id=204 op=LOAD Dec 16 03:53:06.210000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdbd2ac250 a2=94 a3=2 items=0 ppid=4432 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:53:06.210000 audit: BPF prog-id=204 op=UNLOAD Dec 16 03:53:06.210000 audit[4569]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdbd2ac250 a2=0 a3=2 items=0 ppid=4432 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.210000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:53:06.210000 audit: BPF prog-id=205 op=LOAD Dec 16 03:53:06.210000 audit[4569]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdbd2ac350 a2=94 a3=30 items=0 ppid=4432 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.210000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:53:06.228000 audit: BPF prog-id=206 op=LOAD Dec 16 03:53:06.228000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff0f21ba80 a2=98 a3=0 items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.229000 audit: BPF prog-id=206 op=UNLOAD Dec 16 03:53:06.229000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff0f21ba50 a3=0 items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.229000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.229000 audit: BPF prog-id=207 op=LOAD Dec 16 03:53:06.229000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff0f21b870 a2=94 a3=54428f items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.229000 audit: BPF prog-id=207 op=UNLOAD Dec 16 03:53:06.229000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff0f21b870 a2=94 a3=54428f items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.229000 audit: BPF prog-id=208 op=LOAD Dec 16 03:53:06.229000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff0f21b8a0 a2=94 a3=2 items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.229000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.229000 audit: BPF prog-id=208 op=UNLOAD Dec 16 03:53:06.229000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff0f21b8a0 a2=0 a3=2 items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.494000 audit: BPF prog-id=209 op=LOAD Dec 16 03:53:06.494000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff0f21b760 a2=94 a3=1 items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.494000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.494000 audit: BPF prog-id=209 op=UNLOAD Dec 16 03:53:06.494000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff0f21b760 a2=94 a3=1 items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.494000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.508000 audit: BPF prog-id=210 op=LOAD Dec 16 03:53:06.508000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff0f21b750 a2=94 a3=4 items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.508000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.508000 audit: BPF prog-id=210 op=UNLOAD Dec 16 03:53:06.508000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff0f21b750 a2=0 a3=4 items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.508000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.509000 audit: BPF prog-id=211 op=LOAD Dec 16 03:53:06.509000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff0f21b5b0 a2=94 a3=5 items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.509000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.509000 audit: BPF prog-id=211 op=UNLOAD Dec 16 03:53:06.509000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff0f21b5b0 a2=0 a3=5 items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.509000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.509000 audit: BPF prog-id=212 op=LOAD Dec 16 03:53:06.509000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff0f21b7d0 a2=94 a3=6 items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.509000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.509000 audit: BPF prog-id=212 op=UNLOAD Dec 16 03:53:06.509000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff0f21b7d0 a2=0 a3=6 items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.509000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.509000 audit: BPF prog-id=213 op=LOAD Dec 16 03:53:06.509000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff0f21af80 a2=94 a3=88 items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.509000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.510000 audit: BPF prog-id=214 op=LOAD Dec 16 03:53:06.510000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff0f21ae00 a2=94 a3=2 items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.510000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.510000 audit: BPF prog-id=214 op=UNLOAD Dec 16 03:53:06.510000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff0f21ae30 a2=0 a3=7fff0f21af30 items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.510000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.510000 audit: BPF prog-id=213 op=UNLOAD Dec 16 03:53:06.510000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=10807d10 a2=0 a3=6fdc1a7dbcce8603 items=0 ppid=4432 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.510000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:53:06.519000 audit: BPF prog-id=205 op=UNLOAD Dec 16 03:53:06.519000 audit[4432]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000ccd440 a2=0 a3=0 items=0 ppid=4425 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.519000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 03:53:06.598000 audit[4605]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=4605 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:53:06.598000 audit[4605]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdb5a756d0 a2=0 a3=7ffdb5a756bc items=0 ppid=4432 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.598000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:53:06.608000 audit[4608]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4608 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:53:06.608000 audit[4608]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe1f998a50 a2=0 a3=7ffe1f998a3c items=0 ppid=4432 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.608000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:53:06.615000 audit[4606]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4606 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:53:06.615000 audit[4606]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc56cbd540 a2=0 a3=7ffc56cbd52c items=0 ppid=4432 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.615000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:53:06.619000 audit[4607]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4607 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:53:06.619000 audit[4607]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffe13afb4c0 a2=0 a3=7ffe13afb4ac items=0 ppid=4432 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.619000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:53:06.649160 kubelet[2985]: E1216 03:53:06.649081 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9867c46-mhs6d" podUID="cc879cee-dcea-4d9b-a071-304996197deb" Dec 16 03:53:06.693000 audit[4623]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:53:06.693000 audit[4623]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffffda012c0 a2=0 a3=7ffffda012ac items=0 ppid=3138 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.693000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 
03:53:06.705000 audit[4623]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:53:06.705000 audit[4623]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffffda012c0 a2=0 a3=0 items=0 ppid=3138 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:06.705000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:53:07.511062 systemd-networkd[1555]: vxlan.calico: Gained IPv6LL Dec 16 03:53:09.192915 containerd[1640]: time="2025-12-16T03:53:09.192806969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xjdhd,Uid:2a73cfff-498a-4801-a9d9-f3b2fd31770b,Namespace:kube-system,Attempt:0,}" Dec 16 03:53:09.382096 systemd-networkd[1555]: cali4c39ea8d8e1: Link UP Dec 16 03:53:09.384686 systemd-networkd[1555]: cali4c39ea8d8e1: Gained carrier Dec 16 03:53:09.413579 containerd[1640]: 2025-12-16 03:53:09.265 [INFO][4627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--xjdhd-eth0 coredns-674b8bbfcf- kube-system 2a73cfff-498a-4801-a9d9-f3b2fd31770b 840 0 2025-12-16 03:52:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-n64tt.gb1.brightbox.com coredns-674b8bbfcf-xjdhd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4c39ea8d8e1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-xjdhd" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--xjdhd-" Dec 16 03:53:09.413579 containerd[1640]: 2025-12-16 03:53:09.265 [INFO][4627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" Namespace="kube-system" Pod="coredns-674b8bbfcf-xjdhd" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--xjdhd-eth0" Dec 16 03:53:09.413579 containerd[1640]: 2025-12-16 03:53:09.314 [INFO][4639] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" HandleID="k8s-pod-network.f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" Workload="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--xjdhd-eth0" Dec 16 03:53:09.414052 containerd[1640]: 2025-12-16 03:53:09.314 [INFO][4639] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" HandleID="k8s-pod-network.f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" Workload="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--xjdhd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d51e0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-n64tt.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-xjdhd", "timestamp":"2025-12-16 03:53:09.314189033 +0000 UTC"}, Hostname:"srv-n64tt.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:53:09.414052 containerd[1640]: 2025-12-16 03:53:09.314 [INFO][4639] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 03:53:09.414052 containerd[1640]: 2025-12-16 03:53:09.314 [INFO][4639] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:53:09.414052 containerd[1640]: 2025-12-16 03:53:09.314 [INFO][4639] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-n64tt.gb1.brightbox.com' Dec 16 03:53:09.414052 containerd[1640]: 2025-12-16 03:53:09.325 [INFO][4639] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:09.414052 containerd[1640]: 2025-12-16 03:53:09.333 [INFO][4639] ipam/ipam.go 394: Looking up existing affinities for host host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:09.414052 containerd[1640]: 2025-12-16 03:53:09.339 [INFO][4639] ipam/ipam.go 511: Trying affinity for 192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:09.414052 containerd[1640]: 2025-12-16 03:53:09.343 [INFO][4639] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:09.414052 containerd[1640]: 2025-12-16 03:53:09.346 [INFO][4639] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:09.415537 containerd[1640]: 2025-12-16 03:53:09.346 [INFO][4639] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:09.415537 containerd[1640]: 2025-12-16 03:53:09.348 [INFO][4639] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb Dec 16 03:53:09.415537 containerd[1640]: 2025-12-16 03:53:09.357 [INFO][4639] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.64/26 
handle="k8s-pod-network.f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:09.415537 containerd[1640]: 2025-12-16 03:53:09.367 [INFO][4639] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.66/26] block=192.168.124.64/26 handle="k8s-pod-network.f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:09.415537 containerd[1640]: 2025-12-16 03:53:09.368 [INFO][4639] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.66/26] handle="k8s-pod-network.f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:09.415537 containerd[1640]: 2025-12-16 03:53:09.368 [INFO][4639] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:53:09.415537 containerd[1640]: 2025-12-16 03:53:09.368 [INFO][4639] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.66/26] IPv6=[] ContainerID="f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" HandleID="k8s-pod-network.f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" Workload="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--xjdhd-eth0" Dec 16 03:53:09.416502 containerd[1640]: 2025-12-16 03:53:09.373 [INFO][4627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" Namespace="kube-system" Pod="coredns-674b8bbfcf-xjdhd" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--xjdhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--xjdhd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2a73cfff-498a-4801-a9d9-f3b2fd31770b", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 
52, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n64tt.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-xjdhd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c39ea8d8e1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:53:09.416502 containerd[1640]: 2025-12-16 03:53:09.373 [INFO][4627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.66/32] ContainerID="f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" Namespace="kube-system" Pod="coredns-674b8bbfcf-xjdhd" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--xjdhd-eth0" Dec 16 03:53:09.416502 containerd[1640]: 2025-12-16 03:53:09.373 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c39ea8d8e1 ContainerID="f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-xjdhd" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--xjdhd-eth0" Dec 16 03:53:09.416502 containerd[1640]: 2025-12-16 03:53:09.386 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" Namespace="kube-system" Pod="coredns-674b8bbfcf-xjdhd" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--xjdhd-eth0" Dec 16 03:53:09.416502 containerd[1640]: 2025-12-16 03:53:09.386 [INFO][4627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" Namespace="kube-system" Pod="coredns-674b8bbfcf-xjdhd" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--xjdhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--xjdhd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2a73cfff-498a-4801-a9d9-f3b2fd31770b", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 52, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n64tt.gb1.brightbox.com", ContainerID:"f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb", Pod:"coredns-674b8bbfcf-xjdhd", Endpoint:"eth0", 
ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c39ea8d8e1", MAC:"fe:71:3f:64:76:81", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:53:09.416502 containerd[1640]: 2025-12-16 03:53:09.406 [INFO][4627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" Namespace="kube-system" Pod="coredns-674b8bbfcf-xjdhd" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--xjdhd-eth0" Dec 16 03:53:09.454532 containerd[1640]: time="2025-12-16T03:53:09.454371160Z" level=info msg="connecting to shim f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb" address="unix:///run/containerd/s/70359ce5b5e54190bced64007713a0e1546b7574cb99747817de8ecbd96972f2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:53:09.461926 kernel: kauditd_printk_skb: 231 callbacks suppressed Dec 16 03:53:09.462031 kernel: audit: type=1325 audit(1765857189.456:659): table=filter:127 family=2 entries=42 op=nft_register_chain pid=4659 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:53:09.456000 audit[4659]: NETFILTER_CFG table=filter:127 family=2 entries=42 op=nft_register_chain pid=4659 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:53:09.468684 kernel: audit: type=1300 audit(1765857189.456:659): 
arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffdac4db660 a2=0 a3=7ffdac4db64c items=0 ppid=4432 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.456000 audit[4659]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffdac4db660 a2=0 a3=7ffdac4db64c items=0 ppid=4432 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.472740 kernel: audit: type=1327 audit(1765857189.456:659): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:53:09.456000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:53:09.517201 systemd[1]: Started cri-containerd-f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb.scope - libcontainer container f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb. 
Dec 16 03:53:09.536000 audit: BPF prog-id=215 op=LOAD Dec 16 03:53:09.538766 kernel: audit: type=1334 audit(1765857189.536:660): prog-id=215 op=LOAD Dec 16 03:53:09.538000 audit: BPF prog-id=216 op=LOAD Dec 16 03:53:09.538000 audit[4675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4664 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.543270 kernel: audit: type=1334 audit(1765857189.538:661): prog-id=216 op=LOAD Dec 16 03:53:09.543354 kernel: audit: type=1300 audit(1765857189.538:661): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4664 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636316635363734626639623066633563653536643232646638613063 Dec 16 03:53:09.548902 kernel: audit: type=1327 audit(1765857189.538:661): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636316635363734626639623066633563653536643232646638613063 Dec 16 03:53:09.538000 audit: BPF prog-id=216 op=UNLOAD Dec 16 03:53:09.553219 kernel: audit: type=1334 audit(1765857189.538:662): prog-id=216 op=UNLOAD Dec 16 03:53:09.538000 audit[4675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4664 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.556360 kernel: audit: type=1300 audit(1765857189.538:662): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4664 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636316635363734626639623066633563653536643232646638613063 Dec 16 03:53:09.561911 kernel: audit: type=1327 audit(1765857189.538:662): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636316635363734626639623066633563653536643232646638613063 Dec 16 03:53:09.540000 audit: BPF prog-id=217 op=LOAD Dec 16 03:53:09.540000 audit[4675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4664 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636316635363734626639623066633563653536643232646638613063 Dec 16 03:53:09.540000 audit: BPF prog-id=218 op=LOAD Dec 16 03:53:09.540000 audit[4675]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4664 pid=4675 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636316635363734626639623066633563653536643232646638613063 Dec 16 03:53:09.540000 audit: BPF prog-id=218 op=UNLOAD Dec 16 03:53:09.540000 audit[4675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4664 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636316635363734626639623066633563653536643232646638613063 Dec 16 03:53:09.540000 audit: BPF prog-id=217 op=UNLOAD Dec 16 03:53:09.540000 audit[4675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4664 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636316635363734626639623066633563653536643232646638613063 Dec 16 03:53:09.540000 audit: BPF prog-id=219 op=LOAD Dec 16 03:53:09.540000 audit[4675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4664 
pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636316635363734626639623066633563653536643232646638613063 Dec 16 03:53:09.626996 containerd[1640]: time="2025-12-16T03:53:09.626938848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xjdhd,Uid:2a73cfff-498a-4801-a9d9-f3b2fd31770b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb\"" Dec 16 03:53:09.635009 containerd[1640]: time="2025-12-16T03:53:09.634971423Z" level=info msg="CreateContainer within sandbox \"f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:53:09.651763 containerd[1640]: time="2025-12-16T03:53:09.651237550Z" level=info msg="Container 7aadb5d91357d0e635c349c1a8cd1c5fb68735516aecfab68f8a791b995381be: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:53:09.656173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount924990189.mount: Deactivated successfully. 
Dec 16 03:53:09.666478 containerd[1640]: time="2025-12-16T03:53:09.666412128Z" level=info msg="CreateContainer within sandbox \"f61f5674bf9b0fc5ce56d22df8a0c9a09ea87f7aba6853eb1d1bf7299f1bc0fb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7aadb5d91357d0e635c349c1a8cd1c5fb68735516aecfab68f8a791b995381be\"" Dec 16 03:53:09.667567 containerd[1640]: time="2025-12-16T03:53:09.667533470Z" level=info msg="StartContainer for \"7aadb5d91357d0e635c349c1a8cd1c5fb68735516aecfab68f8a791b995381be\"" Dec 16 03:53:09.669331 containerd[1640]: time="2025-12-16T03:53:09.669296774Z" level=info msg="connecting to shim 7aadb5d91357d0e635c349c1a8cd1c5fb68735516aecfab68f8a791b995381be" address="unix:///run/containerd/s/70359ce5b5e54190bced64007713a0e1546b7574cb99747817de8ecbd96972f2" protocol=ttrpc version=3 Dec 16 03:53:09.703012 systemd[1]: Started cri-containerd-7aadb5d91357d0e635c349c1a8cd1c5fb68735516aecfab68f8a791b995381be.scope - libcontainer container 7aadb5d91357d0e635c349c1a8cd1c5fb68735516aecfab68f8a791b995381be. 
Dec 16 03:53:09.735000 audit: BPF prog-id=220 op=LOAD Dec 16 03:53:09.737000 audit: BPF prog-id=221 op=LOAD Dec 16 03:53:09.737000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4664 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761616462356439313335376430653633356333343963316138636431 Dec 16 03:53:09.737000 audit: BPF prog-id=221 op=UNLOAD Dec 16 03:53:09.737000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4664 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761616462356439313335376430653633356333343963316138636431 Dec 16 03:53:09.737000 audit: BPF prog-id=222 op=LOAD Dec 16 03:53:09.737000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4664 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.737000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761616462356439313335376430653633356333343963316138636431 Dec 16 03:53:09.737000 audit: BPF prog-id=223 op=LOAD Dec 16 03:53:09.737000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4664 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761616462356439313335376430653633356333343963316138636431 Dec 16 03:53:09.737000 audit: BPF prog-id=223 op=UNLOAD Dec 16 03:53:09.737000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4664 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761616462356439313335376430653633356333343963316138636431 Dec 16 03:53:09.737000 audit: BPF prog-id=222 op=UNLOAD Dec 16 03:53:09.737000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4664 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:53:09.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761616462356439313335376430653633356333343963316138636431 Dec 16 03:53:09.738000 audit: BPF prog-id=224 op=LOAD Dec 16 03:53:09.738000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4664 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:09.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761616462356439313335376430653633356333343963316138636431 Dec 16 03:53:09.772272 containerd[1640]: time="2025-12-16T03:53:09.772204745Z" level=info msg="StartContainer for \"7aadb5d91357d0e635c349c1a8cd1c5fb68735516aecfab68f8a791b995381be\" returns successfully" Dec 16 03:53:10.192850 containerd[1640]: time="2025-12-16T03:53:10.192656225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76cd974fc5-ks52b,Uid:ada9a6ed-985f-4b6f-99d7-ef5caf58dd85,Namespace:calico-system,Attempt:0,}" Dec 16 03:53:10.399840 systemd-networkd[1555]: cali231128b4a19: Link UP Dec 16 03:53:10.401415 systemd-networkd[1555]: cali231128b4a19: Gained carrier Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.276 [INFO][4735] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--n64tt.gb1.brightbox.com-k8s-calico--kube--controllers--76cd974fc5--ks52b-eth0 calico-kube-controllers-76cd974fc5- calico-system ada9a6ed-985f-4b6f-99d7-ef5caf58dd85 839 0 2025-12-16 03:52:30 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76cd974fc5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-n64tt.gb1.brightbox.com calico-kube-controllers-76cd974fc5-ks52b eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali231128b4a19 [] [] }} ContainerID="4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" Namespace="calico-system" Pod="calico-kube-controllers-76cd974fc5-ks52b" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--kube--controllers--76cd974fc5--ks52b-" Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.276 [INFO][4735] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" Namespace="calico-system" Pod="calico-kube-controllers-76cd974fc5-ks52b" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--kube--controllers--76cd974fc5--ks52b-eth0" Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.334 [INFO][4746] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" HandleID="k8s-pod-network.4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" Workload="srv--n64tt.gb1.brightbox.com-k8s-calico--kube--controllers--76cd974fc5--ks52b-eth0" Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.335 [INFO][4746] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" HandleID="k8s-pod-network.4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" Workload="srv--n64tt.gb1.brightbox.com-k8s-calico--kube--controllers--76cd974fc5--ks52b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103900), 
Attrs:map[string]string{"namespace":"calico-system", "node":"srv-n64tt.gb1.brightbox.com", "pod":"calico-kube-controllers-76cd974fc5-ks52b", "timestamp":"2025-12-16 03:53:10.334298694 +0000 UTC"}, Hostname:"srv-n64tt.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.335 [INFO][4746] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.335 [INFO][4746] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.335 [INFO][4746] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-n64tt.gb1.brightbox.com' Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.353 [INFO][4746] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.360 [INFO][4746] ipam/ipam.go 394: Looking up existing affinities for host host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.368 [INFO][4746] ipam/ipam.go 511: Trying affinity for 192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.371 [INFO][4746] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.374 [INFO][4746] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.374 [INFO][4746] ipam/ipam.go 1219: Attempting to assign 1 addresses from 
block block=192.168.124.64/26 handle="k8s-pod-network.4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.376 [INFO][4746] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.382 [INFO][4746] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.389 [INFO][4746] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.67/26] block=192.168.124.64/26 handle="k8s-pod-network.4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.389 [INFO][4746] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.67/26] handle="k8s-pod-network.4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.389 [INFO][4746] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:53:10.425688 containerd[1640]: 2025-12-16 03:53:10.389 [INFO][4746] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.67/26] IPv6=[] ContainerID="4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" HandleID="k8s-pod-network.4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" Workload="srv--n64tt.gb1.brightbox.com-k8s-calico--kube--controllers--76cd974fc5--ks52b-eth0" Dec 16 03:53:10.431281 containerd[1640]: 2025-12-16 03:53:10.394 [INFO][4735] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" Namespace="calico-system" Pod="calico-kube-controllers-76cd974fc5-ks52b" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--kube--controllers--76cd974fc5--ks52b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n64tt.gb1.brightbox.com-k8s-calico--kube--controllers--76cd974fc5--ks52b-eth0", GenerateName:"calico-kube-controllers-76cd974fc5-", Namespace:"calico-system", SelfLink:"", UID:"ada9a6ed-985f-4b6f-99d7-ef5caf58dd85", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76cd974fc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n64tt.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-76cd974fc5-ks52b", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali231128b4a19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:53:10.431281 containerd[1640]: 2025-12-16 03:53:10.395 [INFO][4735] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.67/32] ContainerID="4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" Namespace="calico-system" Pod="calico-kube-controllers-76cd974fc5-ks52b" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--kube--controllers--76cd974fc5--ks52b-eth0" Dec 16 03:53:10.431281 containerd[1640]: 2025-12-16 03:53:10.395 [INFO][4735] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali231128b4a19 ContainerID="4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" Namespace="calico-system" Pod="calico-kube-controllers-76cd974fc5-ks52b" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--kube--controllers--76cd974fc5--ks52b-eth0" Dec 16 03:53:10.431281 containerd[1640]: 2025-12-16 03:53:10.402 [INFO][4735] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" Namespace="calico-system" Pod="calico-kube-controllers-76cd974fc5-ks52b" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--kube--controllers--76cd974fc5--ks52b-eth0" Dec 16 03:53:10.431281 containerd[1640]: 2025-12-16 03:53:10.403 [INFO][4735] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" Namespace="calico-system" Pod="calico-kube-controllers-76cd974fc5-ks52b" 
WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--kube--controllers--76cd974fc5--ks52b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n64tt.gb1.brightbox.com-k8s-calico--kube--controllers--76cd974fc5--ks52b-eth0", GenerateName:"calico-kube-controllers-76cd974fc5-", Namespace:"calico-system", SelfLink:"", UID:"ada9a6ed-985f-4b6f-99d7-ef5caf58dd85", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76cd974fc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n64tt.gb1.brightbox.com", ContainerID:"4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab", Pod:"calico-kube-controllers-76cd974fc5-ks52b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali231128b4a19", MAC:"9e:72:22:b7:ff:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:53:10.431281 containerd[1640]: 2025-12-16 03:53:10.418 [INFO][4735] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" Namespace="calico-system" 
Pod="calico-kube-controllers-76cd974fc5-ks52b" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--kube--controllers--76cd974fc5--ks52b-eth0" Dec 16 03:53:10.463000 audit[4760]: NETFILTER_CFG table=filter:128 family=2 entries=40 op=nft_register_chain pid=4760 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:53:10.463000 audit[4760]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7fff480911c0 a2=0 a3=7fff480911ac items=0 ppid=4432 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:10.463000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:53:10.485755 containerd[1640]: time="2025-12-16T03:53:10.485523070Z" level=info msg="connecting to shim 4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab" address="unix:///run/containerd/s/f5ce694735ae43db3808e7030995178a695e00f50ac9adf233cfeba877bd6c15" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:53:10.538984 systemd[1]: Started cri-containerd-4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab.scope - libcontainer container 4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab. 
Dec 16 03:53:10.559000 audit: BPF prog-id=225 op=LOAD Dec 16 03:53:10.560000 audit: BPF prog-id=226 op=LOAD Dec 16 03:53:10.560000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4768 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:10.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464373534306437383163653837356566383231666531303436633062 Dec 16 03:53:10.560000 audit: BPF prog-id=226 op=UNLOAD Dec 16 03:53:10.560000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4768 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:10.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464373534306437383163653837356566383231666531303436633062 Dec 16 03:53:10.560000 audit: BPF prog-id=227 op=LOAD Dec 16 03:53:10.560000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4768 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:10.560000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464373534306437383163653837356566383231666531303436633062 Dec 16 03:53:10.560000 audit: BPF prog-id=228 op=LOAD Dec 16 03:53:10.560000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4768 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:10.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464373534306437383163653837356566383231666531303436633062 Dec 16 03:53:10.560000 audit: BPF prog-id=228 op=UNLOAD Dec 16 03:53:10.560000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4768 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:10.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464373534306437383163653837356566383231666531303436633062 Dec 16 03:53:10.560000 audit: BPF prog-id=227 op=UNLOAD Dec 16 03:53:10.560000 audit[4781]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4768 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:53:10.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464373534306437383163653837356566383231666531303436633062 Dec 16 03:53:10.560000 audit: BPF prog-id=229 op=LOAD Dec 16 03:53:10.560000 audit[4781]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4768 pid=4781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:10.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464373534306437383163653837356566383231666531303436633062 Dec 16 03:53:10.617591 containerd[1640]: time="2025-12-16T03:53:10.617493186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76cd974fc5-ks52b,Uid:ada9a6ed-985f-4b6f-99d7-ef5caf58dd85,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d7540d781ce875ef821fe1046c0bc933cc1ffd13af3e0d1f81f25b3658e76ab\"" Dec 16 03:53:10.620854 containerd[1640]: time="2025-12-16T03:53:10.620818655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:53:10.689744 kubelet[2985]: I1216 03:53:10.687217 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xjdhd" podStartSLOduration=59.687194434 podStartE2EDuration="59.687194434s" podCreationTimestamp="2025-12-16 03:52:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:53:10.684856781 +0000 UTC m=+66.699483818" 
watchObservedRunningTime="2025-12-16 03:53:10.687194434 +0000 UTC m=+66.701821449" Dec 16 03:53:10.712665 systemd-networkd[1555]: cali4c39ea8d8e1: Gained IPv6LL Dec 16 03:53:10.730000 audit[4808]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:53:10.730000 audit[4808]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc070ca650 a2=0 a3=7ffc070ca63c items=0 ppid=3138 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:10.730000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:53:10.737000 audit[4808]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:53:10.737000 audit[4808]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc070ca650 a2=0 a3=0 items=0 ppid=3138 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:10.737000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:53:10.764000 audit[4810]: NETFILTER_CFG table=filter:131 family=2 entries=17 op=nft_register_rule pid=4810 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:53:10.764000 audit[4810]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc80a93560 a2=0 a3=7ffc80a9354c items=0 ppid=3138 pid=4810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:10.764000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:53:10.772000 audit[4810]: NETFILTER_CFG table=nat:132 family=2 entries=35 op=nft_register_chain pid=4810 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:53:10.772000 audit[4810]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc80a93560 a2=0 a3=7ffc80a9354c items=0 ppid=3138 pid=4810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:10.772000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:53:10.932280 containerd[1640]: time="2025-12-16T03:53:10.932013950Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:10.934634 containerd[1640]: time="2025-12-16T03:53:10.934467308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:53:10.934634 containerd[1640]: time="2025-12-16T03:53:10.934594270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:10.936968 kubelet[2985]: E1216 03:53:10.936918 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:53:10.937184 kubelet[2985]: E1216 03:53:10.937137 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:53:10.942760 kubelet[2985]: E1216 03:53:10.942595 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkjc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76cd974fc5-ks52b_calico-system(ada9a6ed-985f-4b6f-99d7-ef5caf58dd85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:10.944310 kubelet[2985]: E1216 03:53:10.944203 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" podUID="ada9a6ed-985f-4b6f-99d7-ef5caf58dd85" Dec 16 03:53:11.191783 
containerd[1640]: time="2025-12-16T03:53:11.191690191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqv5z,Uid:ac3cb9e0-75ec-4604-aa8a-c0773489c9c8,Namespace:kube-system,Attempt:0,}" Dec 16 03:53:11.364828 systemd-networkd[1555]: cali93def0f8b92: Link UP Dec 16 03:53:11.366870 systemd-networkd[1555]: cali93def0f8b92: Gained carrier Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.253 [INFO][4812] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jqv5z-eth0 coredns-674b8bbfcf- kube-system ac3cb9e0-75ec-4604-aa8a-c0773489c9c8 837 0 2025-12-16 03:52:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-n64tt.gb1.brightbox.com coredns-674b8bbfcf-jqv5z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali93def0f8b92 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqv5z" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jqv5z-" Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.253 [INFO][4812] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqv5z" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jqv5z-eth0" Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.307 [INFO][4824] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" HandleID="k8s-pod-network.a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" 
Workload="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jqv5z-eth0" Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.307 [INFO][4824] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" HandleID="k8s-pod-network.a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" Workload="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jqv5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-n64tt.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-jqv5z", "timestamp":"2025-12-16 03:53:11.30725229 +0000 UTC"}, Hostname:"srv-n64tt.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.307 [INFO][4824] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.307 [INFO][4824] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.307 [INFO][4824] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-n64tt.gb1.brightbox.com' Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.318 [INFO][4824] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.324 [INFO][4824] ipam/ipam.go 394: Looking up existing affinities for host host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.329 [INFO][4824] ipam/ipam.go 511: Trying affinity for 192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.332 [INFO][4824] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.335 [INFO][4824] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.335 [INFO][4824] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.337 [INFO][4824] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.343 [INFO][4824] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.354 [INFO][4824] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.68/26] block=192.168.124.64/26 handle="k8s-pod-network.a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.354 [INFO][4824] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.68/26] handle="k8s-pod-network.a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.355 [INFO][4824] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:53:11.388973 containerd[1640]: 2025-12-16 03:53:11.355 [INFO][4824] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.68/26] IPv6=[] ContainerID="a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" HandleID="k8s-pod-network.a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" Workload="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jqv5z-eth0" Dec 16 03:53:11.391963 containerd[1640]: 2025-12-16 03:53:11.358 [INFO][4812] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqv5z" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jqv5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jqv5z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ac3cb9e0-75ec-4604-aa8a-c0773489c9c8", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 52, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n64tt.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-jqv5z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93def0f8b92", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:53:11.391963 containerd[1640]: 2025-12-16 03:53:11.359 [INFO][4812] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.68/32] ContainerID="a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqv5z" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jqv5z-eth0" Dec 16 03:53:11.391963 containerd[1640]: 2025-12-16 03:53:11.359 [INFO][4812] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93def0f8b92 ContainerID="a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqv5z" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jqv5z-eth0" Dec 16 03:53:11.391963 containerd[1640]: 
2025-12-16 03:53:11.368 [INFO][4812] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqv5z" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jqv5z-eth0" Dec 16 03:53:11.391963 containerd[1640]: 2025-12-16 03:53:11.368 [INFO][4812] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqv5z" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jqv5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jqv5z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ac3cb9e0-75ec-4604-aa8a-c0773489c9c8", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 52, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n64tt.gb1.brightbox.com", ContainerID:"a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e", Pod:"coredns-674b8bbfcf-jqv5z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali93def0f8b92", MAC:"5a:b1:21:9b:81:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:53:11.391963 containerd[1640]: 2025-12-16 03:53:11.385 [INFO][4812] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqv5z" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jqv5z-eth0" Dec 16 03:53:11.429235 containerd[1640]: time="2025-12-16T03:53:11.428976722Z" level=info msg="connecting to shim a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e" address="unix:///run/containerd/s/1fcfd6338679dee3f2ffcaae3c6ba34b4569fd1447495d8ed2a4150750b8ffa6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:53:11.441000 audit[4855]: NETFILTER_CFG table=filter:133 family=2 entries=40 op=nft_register_chain pid=4855 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:53:11.441000 audit[4855]: SYSCALL arch=c000003e syscall=46 success=yes exit=20344 a0=3 a1=7ffcce9b5980 a2=0 a3=7ffcce9b596c items=0 ppid=4432 pid=4855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:11.441000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:53:11.483978 systemd[1]: Started cri-containerd-a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e.scope - libcontainer container a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e. Dec 16 03:53:11.508000 audit: BPF prog-id=230 op=LOAD Dec 16 03:53:11.509000 audit: BPF prog-id=231 op=LOAD Dec 16 03:53:11.509000 audit[4862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4850 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:11.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135663038633063316665306438643237303939343565643265343332 Dec 16 03:53:11.509000 audit: BPF prog-id=231 op=UNLOAD Dec 16 03:53:11.509000 audit[4862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4850 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:11.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135663038633063316665306438643237303939343565643265343332 Dec 16 03:53:11.510000 audit: BPF prog-id=232 op=LOAD Dec 16 03:53:11.510000 audit[4862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4850 pid=4862 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:11.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135663038633063316665306438643237303939343565643265343332 Dec 16 03:53:11.510000 audit: BPF prog-id=233 op=LOAD Dec 16 03:53:11.510000 audit[4862]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4850 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:11.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135663038633063316665306438643237303939343565643265343332 Dec 16 03:53:11.510000 audit: BPF prog-id=233 op=UNLOAD Dec 16 03:53:11.510000 audit[4862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4850 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:11.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135663038633063316665306438643237303939343565643265343332 Dec 16 03:53:11.510000 audit: BPF prog-id=232 op=UNLOAD Dec 16 03:53:11.510000 audit[4862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 
a3=0 items=0 ppid=4850 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:11.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135663038633063316665306438643237303939343565643265343332 Dec 16 03:53:11.510000 audit: BPF prog-id=234 op=LOAD Dec 16 03:53:11.510000 audit[4862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4850 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:11.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135663038633063316665306438643237303939343565643265343332 Dec 16 03:53:11.542925 systemd-networkd[1555]: cali231128b4a19: Gained IPv6LL Dec 16 03:53:11.572065 containerd[1640]: time="2025-12-16T03:53:11.571877896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqv5z,Uid:ac3cb9e0-75ec-4604-aa8a-c0773489c9c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e\"" Dec 16 03:53:11.579633 containerd[1640]: time="2025-12-16T03:53:11.579566634Z" level=info msg="CreateContainer within sandbox \"a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:53:11.590793 containerd[1640]: time="2025-12-16T03:53:11.590731215Z" level=info msg="Container 
ddfc1b0b75b80cc61cbd026294c287098dec388a8851189b2b561be1a5787acc: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:53:11.600966 containerd[1640]: time="2025-12-16T03:53:11.600880117Z" level=info msg="CreateContainer within sandbox \"a5f08c0c1fe0d8d2709945ed2e43254f997eb224808b5df0a4747888af331d5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ddfc1b0b75b80cc61cbd026294c287098dec388a8851189b2b561be1a5787acc\"" Dec 16 03:53:11.603430 containerd[1640]: time="2025-12-16T03:53:11.602227847Z" level=info msg="StartContainer for \"ddfc1b0b75b80cc61cbd026294c287098dec388a8851189b2b561be1a5787acc\"" Dec 16 03:53:11.603430 containerd[1640]: time="2025-12-16T03:53:11.603352816Z" level=info msg="connecting to shim ddfc1b0b75b80cc61cbd026294c287098dec388a8851189b2b561be1a5787acc" address="unix:///run/containerd/s/1fcfd6338679dee3f2ffcaae3c6ba34b4569fd1447495d8ed2a4150750b8ffa6" protocol=ttrpc version=3 Dec 16 03:53:11.639944 systemd[1]: Started cri-containerd-ddfc1b0b75b80cc61cbd026294c287098dec388a8851189b2b561be1a5787acc.scope - libcontainer container ddfc1b0b75b80cc61cbd026294c287098dec388a8851189b2b561be1a5787acc. 
Dec 16 03:53:11.671000 audit: BPF prog-id=235 op=LOAD Dec 16 03:53:11.675000 audit: BPF prog-id=236 op=LOAD Dec 16 03:53:11.675000 audit[4889]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4850 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:11.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464666331623062373562383063633631636264303236323934633238 Dec 16 03:53:11.675000 audit: BPF prog-id=236 op=UNLOAD Dec 16 03:53:11.675000 audit[4889]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4850 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:11.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464666331623062373562383063633631636264303236323934633238 Dec 16 03:53:11.676000 audit: BPF prog-id=237 op=LOAD Dec 16 03:53:11.676000 audit[4889]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4850 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:11.676000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464666331623062373562383063633631636264303236323934633238 Dec 16 03:53:11.676000 audit: BPF prog-id=238 op=LOAD Dec 16 03:53:11.676000 audit[4889]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4850 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:11.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464666331623062373562383063633631636264303236323934633238 Dec 16 03:53:11.676000 audit: BPF prog-id=238 op=UNLOAD Dec 16 03:53:11.676000 audit[4889]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4850 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:11.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464666331623062373562383063633631636264303236323934633238 Dec 16 03:53:11.676000 audit: BPF prog-id=237 op=UNLOAD Dec 16 03:53:11.676000 audit[4889]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4850 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:53:11.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464666331623062373562383063633631636264303236323934633238 Dec 16 03:53:11.676000 audit: BPF prog-id=239 op=LOAD Dec 16 03:53:11.676000 audit[4889]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4850 pid=4889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:11.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464666331623062373562383063633631636264303236323934633238 Dec 16 03:53:11.690699 kubelet[2985]: E1216 03:53:11.690533 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" podUID="ada9a6ed-985f-4b6f-99d7-ef5caf58dd85" Dec 16 03:53:11.728027 containerd[1640]: time="2025-12-16T03:53:11.727888094Z" level=info msg="StartContainer for \"ddfc1b0b75b80cc61cbd026294c287098dec388a8851189b2b561be1a5787acc\" returns successfully" Dec 16 03:53:12.714438 kubelet[2985]: I1216 03:53:12.714180 2985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jqv5z" 
podStartSLOduration=61.714160154 podStartE2EDuration="1m1.714160154s" podCreationTimestamp="2025-12-16 03:52:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:53:12.714106777 +0000 UTC m=+68.728733820" watchObservedRunningTime="2025-12-16 03:53:12.714160154 +0000 UTC m=+68.728787177" Dec 16 03:53:12.756000 audit[4932]: NETFILTER_CFG table=filter:134 family=2 entries=14 op=nft_register_rule pid=4932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:53:12.756000 audit[4932]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd50fe670 a2=0 a3=7ffcd50fe65c items=0 ppid=3138 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:12.756000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:53:12.761000 audit[4932]: NETFILTER_CFG table=nat:135 family=2 entries=44 op=nft_register_rule pid=4932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:53:12.761000 audit[4932]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcd50fe670 a2=0 a3=7ffcd50fe65c items=0 ppid=3138 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:12.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:53:12.783000 audit[4934]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=4934 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:53:12.783000 audit[4934]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff3fcfcec0 a2=0 a3=7fff3fcfceac items=0 ppid=3138 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:12.783000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:53:12.808000 audit[4934]: NETFILTER_CFG table=nat:137 family=2 entries=56 op=nft_register_chain pid=4934 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:53:12.808000 audit[4934]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff3fcfcec0 a2=0 a3=7fff3fcfceac items=0 ppid=3138 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:12.808000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:53:13.191338 containerd[1640]: time="2025-12-16T03:53:13.191237652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64754b8886-wksvf,Uid:84a83236-0b05-4630-b32a-21eabf997946,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:53:13.362195 systemd-networkd[1555]: cali5a07fd4e7de: Link UP Dec 16 03:53:13.363915 systemd-networkd[1555]: cali5a07fd4e7de: Gained carrier Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.250 [INFO][4938] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--wksvf-eth0 calico-apiserver-64754b8886- calico-apiserver 84a83236-0b05-4630-b32a-21eabf997946 838 0 2025-12-16 03:52:24 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64754b8886 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-n64tt.gb1.brightbox.com calico-apiserver-64754b8886-wksvf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5a07fd4e7de [] [] }} ContainerID="80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" Namespace="calico-apiserver" Pod="calico-apiserver-64754b8886-wksvf" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--wksvf-" Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.250 [INFO][4938] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" Namespace="calico-apiserver" Pod="calico-apiserver-64754b8886-wksvf" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--wksvf-eth0" Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.303 [INFO][4949] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" HandleID="k8s-pod-network.80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" Workload="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--wksvf-eth0" Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.304 [INFO][4949] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" HandleID="k8s-pod-network.80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" Workload="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--wksvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-n64tt.gb1.brightbox.com", 
"pod":"calico-apiserver-64754b8886-wksvf", "timestamp":"2025-12-16 03:53:13.303775803 +0000 UTC"}, Hostname:"srv-n64tt.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.304 [INFO][4949] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.304 [INFO][4949] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.304 [INFO][4949] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-n64tt.gb1.brightbox.com' Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.317 [INFO][4949] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.326 [INFO][4949] ipam/ipam.go 394: Looking up existing affinities for host host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.332 [INFO][4949] ipam/ipam.go 511: Trying affinity for 192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.334 [INFO][4949] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.337 [INFO][4949] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.338 [INFO][4949] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.64/26 
handle="k8s-pod-network.80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.340 [INFO][4949] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014 Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.345 [INFO][4949] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.354 [INFO][4949] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.69/26] block=192.168.124.64/26 handle="k8s-pod-network.80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.354 [INFO][4949] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.69/26] handle="k8s-pod-network.80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.354 [INFO][4949] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:53:13.388281 containerd[1640]: 2025-12-16 03:53:13.354 [INFO][4949] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.69/26] IPv6=[] ContainerID="80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" HandleID="k8s-pod-network.80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" Workload="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--wksvf-eth0" Dec 16 03:53:13.390560 containerd[1640]: 2025-12-16 03:53:13.357 [INFO][4938] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" Namespace="calico-apiserver" Pod="calico-apiserver-64754b8886-wksvf" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--wksvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--wksvf-eth0", GenerateName:"calico-apiserver-64754b8886-", Namespace:"calico-apiserver", SelfLink:"", UID:"84a83236-0b05-4630-b32a-21eabf997946", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 52, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64754b8886", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n64tt.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-64754b8886-wksvf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a07fd4e7de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:53:13.390560 containerd[1640]: 2025-12-16 03:53:13.358 [INFO][4938] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.69/32] ContainerID="80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" Namespace="calico-apiserver" Pod="calico-apiserver-64754b8886-wksvf" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--wksvf-eth0" Dec 16 03:53:13.390560 containerd[1640]: 2025-12-16 03:53:13.358 [INFO][4938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a07fd4e7de ContainerID="80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" Namespace="calico-apiserver" Pod="calico-apiserver-64754b8886-wksvf" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--wksvf-eth0" Dec 16 03:53:13.390560 containerd[1640]: 2025-12-16 03:53:13.364 [INFO][4938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" Namespace="calico-apiserver" Pod="calico-apiserver-64754b8886-wksvf" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--wksvf-eth0" Dec 16 03:53:13.390560 containerd[1640]: 2025-12-16 03:53:13.365 [INFO][4938] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" Namespace="calico-apiserver" Pod="calico-apiserver-64754b8886-wksvf" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--wksvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--wksvf-eth0", GenerateName:"calico-apiserver-64754b8886-", Namespace:"calico-apiserver", SelfLink:"", UID:"84a83236-0b05-4630-b32a-21eabf997946", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 52, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64754b8886", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n64tt.gb1.brightbox.com", ContainerID:"80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014", Pod:"calico-apiserver-64754b8886-wksvf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a07fd4e7de", MAC:"7a:13:61:01:0d:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:53:13.390560 containerd[1640]: 2025-12-16 03:53:13.385 [INFO][4938] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" Namespace="calico-apiserver" Pod="calico-apiserver-64754b8886-wksvf" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--wksvf-eth0" Dec 16 03:53:13.399007 systemd-networkd[1555]: cali93def0f8b92: Gained IPv6LL Dec 16 
03:53:13.426000 audit[4963]: NETFILTER_CFG table=filter:138 family=2 entries=62 op=nft_register_chain pid=4963 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:53:13.426000 audit[4963]: SYSCALL arch=c000003e syscall=46 success=yes exit=31772 a0=3 a1=7ffe5fd60a60 a2=0 a3=7ffe5fd60a4c items=0 ppid=4432 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:13.426000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:53:13.435188 containerd[1640]: time="2025-12-16T03:53:13.435038619Z" level=info msg="connecting to shim 80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014" address="unix:///run/containerd/s/ce176ef261371f08b69f0598165ead8958bf5ab926b7cf9d4b8653d128c31731" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:53:13.497053 systemd[1]: Started cri-containerd-80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014.scope - libcontainer container 80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014. 
Dec 16 03:53:13.516000 audit: BPF prog-id=240 op=LOAD Dec 16 03:53:13.518000 audit: BPF prog-id=241 op=LOAD Dec 16 03:53:13.518000 audit[4983]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4972 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:13.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830646465643834613536643637313732663464643763656230656639 Dec 16 03:53:13.518000 audit: BPF prog-id=241 op=UNLOAD Dec 16 03:53:13.518000 audit[4983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:13.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830646465643834613536643637313732663464643763656230656639 Dec 16 03:53:13.518000 audit: BPF prog-id=242 op=LOAD Dec 16 03:53:13.518000 audit[4983]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4972 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:13.518000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830646465643834613536643637313732663464643763656230656639 Dec 16 03:53:13.518000 audit: BPF prog-id=243 op=LOAD Dec 16 03:53:13.518000 audit[4983]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4972 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:13.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830646465643834613536643637313732663464643763656230656639 Dec 16 03:53:13.518000 audit: BPF prog-id=243 op=UNLOAD Dec 16 03:53:13.518000 audit[4983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:13.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830646465643834613536643637313732663464643763656230656639 Dec 16 03:53:13.518000 audit: BPF prog-id=242 op=UNLOAD Dec 16 03:53:13.518000 audit[4983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:53:13.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830646465643834613536643637313732663464643763656230656639 Dec 16 03:53:13.518000 audit: BPF prog-id=244 op=LOAD Dec 16 03:53:13.518000 audit[4983]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4972 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:13.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830646465643834613536643637313732663464643763656230656639 Dec 16 03:53:13.585653 containerd[1640]: time="2025-12-16T03:53:13.585571965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64754b8886-wksvf,Uid:84a83236-0b05-4630-b32a-21eabf997946,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"80dded84a56d67172f4dd7ceb0ef9b578857884c85312c36290092b96d319014\"" Dec 16 03:53:13.589822 containerd[1640]: time="2025-12-16T03:53:13.589790104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:53:13.907266 containerd[1640]: time="2025-12-16T03:53:13.907205942Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:13.908599 containerd[1640]: time="2025-12-16T03:53:13.908471302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 
03:53:13.908599 containerd[1640]: time="2025-12-16T03:53:13.908541155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:13.909059 kubelet[2985]: E1216 03:53:13.908999 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:53:13.911281 kubelet[2985]: E1216 03:53:13.909072 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:53:13.911281 kubelet[2985]: E1216 03:53:13.909256 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7l2pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64754b8886-wksvf_calico-apiserver(84a83236-0b05-4630-b32a-21eabf997946): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:13.911281 kubelet[2985]: E1216 03:53:13.910808 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" podUID="84a83236-0b05-4630-b32a-21eabf997946" Dec 16 03:53:14.193323 containerd[1640]: time="2025-12-16T03:53:14.193148596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64754b8886-qt5xx,Uid:050887c2-252c-4b73-aa88-7211b3356790,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:53:14.362756 systemd-networkd[1555]: calia614029cea1: Link UP Dec 16 03:53:14.363560 systemd-networkd[1555]: calia614029cea1: Gained carrier Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.256 [INFO][5009] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--qt5xx-eth0 calico-apiserver-64754b8886- calico-apiserver 050887c2-252c-4b73-aa88-7211b3356790 841 0 2025-12-16 03:52:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64754b8886 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-n64tt.gb1.brightbox.com calico-apiserver-64754b8886-qt5xx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia614029cea1 [] [] }} ContainerID="5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" 
Namespace="calico-apiserver" Pod="calico-apiserver-64754b8886-qt5xx" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--qt5xx-" Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.257 [INFO][5009] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" Namespace="calico-apiserver" Pod="calico-apiserver-64754b8886-qt5xx" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--qt5xx-eth0" Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.297 [INFO][5020] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" HandleID="k8s-pod-network.5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" Workload="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--qt5xx-eth0" Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.297 [INFO][5020] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" HandleID="k8s-pod-network.5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" Workload="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--qt5xx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-n64tt.gb1.brightbox.com", "pod":"calico-apiserver-64754b8886-qt5xx", "timestamp":"2025-12-16 03:53:14.296999041 +0000 UTC"}, Hostname:"srv-n64tt.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.297 [INFO][5020] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.297 [INFO][5020] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.297 [INFO][5020] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-n64tt.gb1.brightbox.com' Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.307 [INFO][5020] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.313 [INFO][5020] ipam/ipam.go 394: Looking up existing affinities for host host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.322 [INFO][5020] ipam/ipam.go 511: Trying affinity for 192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.324 [INFO][5020] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.327 [INFO][5020] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.329 [INFO][5020] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.333 [INFO][5020] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.340 [INFO][5020] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.64/26 
handle="k8s-pod-network.5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.347 [INFO][5020] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.70/26] block=192.168.124.64/26 handle="k8s-pod-network.5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.347 [INFO][5020] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.70/26] handle="k8s-pod-network.5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.348 [INFO][5020] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:53:14.387939 containerd[1640]: 2025-12-16 03:53:14.350 [INFO][5020] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.70/26] IPv6=[] ContainerID="5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" HandleID="k8s-pod-network.5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" Workload="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--qt5xx-eth0" Dec 16 03:53:14.391411 containerd[1640]: 2025-12-16 03:53:14.356 [INFO][5009] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" Namespace="calico-apiserver" Pod="calico-apiserver-64754b8886-qt5xx" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--qt5xx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--qt5xx-eth0", GenerateName:"calico-apiserver-64754b8886-", Namespace:"calico-apiserver", SelfLink:"", UID:"050887c2-252c-4b73-aa88-7211b3356790", ResourceVersion:"841", 
Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 52, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64754b8886", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n64tt.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-64754b8886-qt5xx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia614029cea1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:53:14.391411 containerd[1640]: 2025-12-16 03:53:14.357 [INFO][5009] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.70/32] ContainerID="5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" Namespace="calico-apiserver" Pod="calico-apiserver-64754b8886-qt5xx" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--qt5xx-eth0" Dec 16 03:53:14.391411 containerd[1640]: 2025-12-16 03:53:14.357 [INFO][5009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia614029cea1 ContainerID="5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" Namespace="calico-apiserver" Pod="calico-apiserver-64754b8886-qt5xx" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--qt5xx-eth0" Dec 16 03:53:14.391411 containerd[1640]: 2025-12-16 
03:53:14.364 [INFO][5009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" Namespace="calico-apiserver" Pod="calico-apiserver-64754b8886-qt5xx" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--qt5xx-eth0" Dec 16 03:53:14.391411 containerd[1640]: 2025-12-16 03:53:14.364 [INFO][5009] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" Namespace="calico-apiserver" Pod="calico-apiserver-64754b8886-qt5xx" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--qt5xx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--qt5xx-eth0", GenerateName:"calico-apiserver-64754b8886-", Namespace:"calico-apiserver", SelfLink:"", UID:"050887c2-252c-4b73-aa88-7211b3356790", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 52, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64754b8886", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n64tt.gb1.brightbox.com", ContainerID:"5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa", Pod:"calico-apiserver-64754b8886-qt5xx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia614029cea1", MAC:"2e:7d:9a:b4:36:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:53:14.391411 containerd[1640]: 2025-12-16 03:53:14.383 [INFO][5009] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" Namespace="calico-apiserver" Pod="calico-apiserver-64754b8886-qt5xx" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-calico--apiserver--64754b8886--qt5xx-eth0" Dec 16 03:53:14.427000 audit[5035]: NETFILTER_CFG table=filter:139 family=2 entries=53 op=nft_register_chain pid=5035 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:53:14.427000 audit[5035]: SYSCALL arch=c000003e syscall=46 success=yes exit=26640 a0=3 a1=7ffd2ca37be0 a2=0 a3=7ffd2ca37bcc items=0 ppid=4432 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:14.427000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:53:14.441223 containerd[1640]: time="2025-12-16T03:53:14.441160065Z" level=info msg="connecting to shim 5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa" address="unix:///run/containerd/s/1709897ad915a3b710e0f84f4adbd820bd7848c2f39a187814a67f32b946b4fc" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:53:14.506047 systemd[1]: Started cri-containerd-5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa.scope - libcontainer container 
5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa. Dec 16 03:53:14.527000 audit: BPF prog-id=245 op=LOAD Dec 16 03:53:14.531288 kernel: kauditd_printk_skb: 161 callbacks suppressed Dec 16 03:53:14.531392 kernel: audit: type=1334 audit(1765857194.527:720): prog-id=245 op=LOAD Dec 16 03:53:14.533000 audit: BPF prog-id=246 op=LOAD Dec 16 03:53:14.535778 kernel: audit: type=1334 audit(1765857194.533:721): prog-id=246 op=LOAD Dec 16 03:53:14.533000 audit[5055]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5044 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:14.542854 kernel: audit: type=1300 audit(1765857194.533:721): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5044 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:14.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363266326231333439316263633063666636616237386639373839 Dec 16 03:53:14.533000 audit: BPF prog-id=246 op=UNLOAD Dec 16 03:53:14.550487 kernel: audit: type=1327 audit(1765857194.533:721): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363266326231333439316263633063666636616237386639373839 Dec 16 03:53:14.550692 kernel: audit: type=1334 audit(1765857194.533:722): prog-id=246 op=UNLOAD Dec 16 03:53:14.550862 kernel: audit: type=1300 audit(1765857194.533:722): 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5044 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:14.533000 audit[5055]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5044 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:14.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363266326231333439316263633063666636616237386639373839 Dec 16 03:53:14.558094 kernel: audit: type=1327 audit(1765857194.533:722): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363266326231333439316263633063666636616237386639373839 Dec 16 03:53:14.533000 audit: BPF prog-id=247 op=LOAD Dec 16 03:53:14.562264 kernel: audit: type=1334 audit(1765857194.533:723): prog-id=247 op=LOAD Dec 16 03:53:14.533000 audit[5055]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5044 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:14.565219 kernel: audit: type=1300 audit(1765857194.533:723): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5044 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:14.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363266326231333439316263633063666636616237386639373839 Dec 16 03:53:14.570794 kernel: audit: type=1327 audit(1765857194.533:723): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363266326231333439316263633063666636616237386639373839 Dec 16 03:53:14.533000 audit: BPF prog-id=248 op=LOAD Dec 16 03:53:14.533000 audit[5055]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5044 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:14.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363266326231333439316263633063666636616237386639373839 Dec 16 03:53:14.533000 audit: BPF prog-id=248 op=UNLOAD Dec 16 03:53:14.533000 audit[5055]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5044 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:14.533000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363266326231333439316263633063666636616237386639373839 Dec 16 03:53:14.533000 audit: BPF prog-id=247 op=UNLOAD Dec 16 03:53:14.533000 audit[5055]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5044 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:14.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363266326231333439316263633063666636616237386639373839 Dec 16 03:53:14.533000 audit: BPF prog-id=249 op=LOAD Dec 16 03:53:14.533000 audit[5055]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5044 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:14.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363266326231333439316263633063666636616237386639373839 Dec 16 03:53:14.622850 containerd[1640]: time="2025-12-16T03:53:14.622745236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64754b8886-qt5xx,Uid:050887c2-252c-4b73-aa88-7211b3356790,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5262f2b13491bcc0cff6ab78f97892a9fa3d87326967ffe5a067686f6fbff7fa\"" Dec 16 03:53:14.625752 
containerd[1640]: time="2025-12-16T03:53:14.625706789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:53:14.707922 kubelet[2985]: E1216 03:53:14.707863 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" podUID="84a83236-0b05-4630-b32a-21eabf997946" Dec 16 03:53:14.744000 audit[5083]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5083 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:53:14.744000 audit[5083]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffef68a3920 a2=0 a3=7ffef68a390c items=0 ppid=3138 pid=5083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:14.744000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:53:14.754000 audit[5083]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5083 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:53:14.754000 audit[5083]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffef68a3920 a2=0 a3=7ffef68a390c items=0 ppid=3138 pid=5083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:14.754000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:53:14.807171 systemd-networkd[1555]: cali5a07fd4e7de: Gained IPv6LL Dec 16 03:53:14.939568 containerd[1640]: time="2025-12-16T03:53:14.939335579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:14.941973 containerd[1640]: time="2025-12-16T03:53:14.940696074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:53:14.941973 containerd[1640]: time="2025-12-16T03:53:14.940783324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:14.942099 kubelet[2985]: E1216 03:53:14.942058 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:53:14.942891 kubelet[2985]: E1216 03:53:14.942114 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:53:14.942891 kubelet[2985]: E1216 03:53:14.942282 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-psz9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64754b8886-qt5xx_calico-apiserver(050887c2-252c-4b73-aa88-7211b3356790): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:14.944066 kubelet[2985]: E1216 03:53:14.943824 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-qt5xx" podUID="050887c2-252c-4b73-aa88-7211b3356790" Dec 16 03:53:15.192086 containerd[1640]: time="2025-12-16T03:53:15.191928502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4px76,Uid:68135f19-2992-417d-b99b-f5dddddbe1d3,Namespace:calico-system,Attempt:0,}" Dec 16 03:53:15.386587 systemd-networkd[1555]: cali1558a23f735: Link UP Dec 16 03:53:15.389274 systemd-networkd[1555]: cali1558a23f735: Gained carrier Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.264 [INFO][5085] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--n64tt.gb1.brightbox.com-k8s-csi--node--driver--4px76-eth0 csi-node-driver- calico-system 68135f19-2992-417d-b99b-f5dddddbe1d3 724 0 2025-12-16 03:52:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-n64tt.gb1.brightbox.com csi-node-driver-4px76 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1558a23f735 [] [] }} ContainerID="6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" Namespace="calico-system" 
Pod="csi-node-driver-4px76" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-csi--node--driver--4px76-" Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.264 [INFO][5085] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" Namespace="calico-system" Pod="csi-node-driver-4px76" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-csi--node--driver--4px76-eth0" Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.319 [INFO][5097] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" HandleID="k8s-pod-network.6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" Workload="srv--n64tt.gb1.brightbox.com-k8s-csi--node--driver--4px76-eth0" Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.319 [INFO][5097] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" HandleID="k8s-pod-network.6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" Workload="srv--n64tt.gb1.brightbox.com-k8s-csi--node--driver--4px76-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef50), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-n64tt.gb1.brightbox.com", "pod":"csi-node-driver-4px76", "timestamp":"2025-12-16 03:53:15.319100428 +0000 UTC"}, Hostname:"srv-n64tt.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.319 [INFO][5097] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.319 [INFO][5097] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.319 [INFO][5097] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-n64tt.gb1.brightbox.com' Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.330 [INFO][5097] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.337 [INFO][5097] ipam/ipam.go 394: Looking up existing affinities for host host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.345 [INFO][5097] ipam/ipam.go 511: Trying affinity for 192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.349 [INFO][5097] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.352 [INFO][5097] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.352 [INFO][5097] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.355 [INFO][5097] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.362 [INFO][5097] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.372 [INFO][5097] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.71/26] block=192.168.124.64/26 handle="k8s-pod-network.6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.372 [INFO][5097] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.71/26] handle="k8s-pod-network.6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.372 [INFO][5097] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:53:15.425321 containerd[1640]: 2025-12-16 03:53:15.372 [INFO][5097] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.71/26] IPv6=[] ContainerID="6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" HandleID="k8s-pod-network.6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" Workload="srv--n64tt.gb1.brightbox.com-k8s-csi--node--driver--4px76-eth0" Dec 16 03:53:15.429293 containerd[1640]: 2025-12-16 03:53:15.376 [INFO][5085] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" Namespace="calico-system" Pod="csi-node-driver-4px76" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-csi--node--driver--4px76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n64tt.gb1.brightbox.com-k8s-csi--node--driver--4px76-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68135f19-2992-417d-b99b-f5dddddbe1d3", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n64tt.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-4px76", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1558a23f735", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:53:15.429293 containerd[1640]: 2025-12-16 03:53:15.376 [INFO][5085] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.71/32] ContainerID="6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" Namespace="calico-system" Pod="csi-node-driver-4px76" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-csi--node--driver--4px76-eth0" Dec 16 03:53:15.429293 containerd[1640]: 2025-12-16 03:53:15.376 [INFO][5085] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1558a23f735 ContainerID="6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" Namespace="calico-system" Pod="csi-node-driver-4px76" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-csi--node--driver--4px76-eth0" Dec 16 03:53:15.429293 containerd[1640]: 2025-12-16 03:53:15.390 [INFO][5085] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" Namespace="calico-system" Pod="csi-node-driver-4px76" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-csi--node--driver--4px76-eth0" 
Dec 16 03:53:15.429293 containerd[1640]: 2025-12-16 03:53:15.393 [INFO][5085] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" Namespace="calico-system" Pod="csi-node-driver-4px76" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-csi--node--driver--4px76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n64tt.gb1.brightbox.com-k8s-csi--node--driver--4px76-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68135f19-2992-417d-b99b-f5dddddbe1d3", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 52, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n64tt.gb1.brightbox.com", ContainerID:"6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc", Pod:"csi-node-driver-4px76", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1558a23f735", MAC:"7a:dd:25:98:6e:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 
03:53:15.429293 containerd[1640]: 2025-12-16 03:53:15.421 [INFO][5085] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" Namespace="calico-system" Pod="csi-node-driver-4px76" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-csi--node--driver--4px76-eth0" Dec 16 03:53:15.480578 containerd[1640]: time="2025-12-16T03:53:15.479953799Z" level=info msg="connecting to shim 6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc" address="unix:///run/containerd/s/194cdc2b2fd65912ec865d259035bbf413eb877689b54c3fd8326f30e09dbf45" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:53:15.511899 systemd-networkd[1555]: calia614029cea1: Gained IPv6LL Dec 16 03:53:15.543018 systemd[1]: Started cri-containerd-6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc.scope - libcontainer container 6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc. Dec 16 03:53:15.627000 audit: BPF prog-id=250 op=LOAD Dec 16 03:53:15.629000 audit: BPF prog-id=251 op=LOAD Dec 16 03:53:15.629000 audit[5128]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5117 pid=5128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:15.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661346332303266623538346261303563366637326562623162656638 Dec 16 03:53:15.629000 audit: BPF prog-id=251 op=UNLOAD Dec 16 03:53:15.629000 audit[5128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5117 pid=5128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:15.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661346332303266623538346261303563366637326562623162656638 Dec 16 03:53:15.629000 audit: BPF prog-id=252 op=LOAD Dec 16 03:53:15.629000 audit[5128]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5117 pid=5128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:15.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661346332303266623538346261303563366637326562623162656638 Dec 16 03:53:15.630000 audit: BPF prog-id=253 op=LOAD Dec 16 03:53:15.630000 audit[5128]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5117 pid=5128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:15.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661346332303266623538346261303563366637326562623162656638 Dec 16 03:53:15.631000 audit: BPF prog-id=253 op=UNLOAD Dec 16 03:53:15.631000 audit[5128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5117 pid=5128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:15.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661346332303266623538346261303563366637326562623162656638 Dec 16 03:53:15.631000 audit: BPF prog-id=252 op=UNLOAD Dec 16 03:53:15.631000 audit[5128]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5117 pid=5128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:15.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661346332303266623538346261303563366637326562623162656638 Dec 16 03:53:15.631000 audit: BPF prog-id=254 op=LOAD Dec 16 03:53:15.631000 audit[5128]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5117 pid=5128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:15.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661346332303266623538346261303563366637326562623162656638 Dec 16 03:53:15.674000 audit[5150]: NETFILTER_CFG table=filter:142 family=2 entries=56 op=nft_register_chain pid=5150 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:53:15.674000 
audit[5150]: SYSCALL arch=c000003e syscall=46 success=yes exit=25516 a0=3 a1=7ffeda648a60 a2=0 a3=7ffeda648a4c items=0 ppid=4432 pid=5150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:15.674000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:53:15.692981 containerd[1640]: time="2025-12-16T03:53:15.692933541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4px76,Uid:68135f19-2992-417d-b99b-f5dddddbe1d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a4c202fb584ba05c6f72ebb1bef8f699a8c5016d203555371f837963f0e43dc\"" Dec 16 03:53:15.696644 containerd[1640]: time="2025-12-16T03:53:15.696558031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:53:15.712395 kubelet[2985]: E1216 03:53:15.712309 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-qt5xx" podUID="050887c2-252c-4b73-aa88-7211b3356790" Dec 16 03:53:15.753000 audit[5158]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:53:15.753000 audit[5158]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff6cbd5cf0 a2=0 a3=7fff6cbd5cdc items=0 ppid=3138 pid=5158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:15.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:53:15.766000 audit[5158]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:53:15.766000 audit[5158]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff6cbd5cf0 a2=0 a3=7fff6cbd5cdc items=0 ppid=3138 pid=5158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:15.766000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:53:16.017238 containerd[1640]: time="2025-12-16T03:53:16.017167503Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:16.018650 containerd[1640]: time="2025-12-16T03:53:16.018595901Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:53:16.018790 containerd[1640]: time="2025-12-16T03:53:16.018738573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:16.019339 kubelet[2985]: E1216 03:53:16.019226 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 
03:53:16.020186 kubelet[2985]: E1216 03:53:16.019311 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:53:16.021454 kubelet[2985]: E1216 03:53:16.021383 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrplh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfi
le{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4px76_calico-system(68135f19-2992-417d-b99b-f5dddddbe1d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:16.025264 containerd[1640]: time="2025-12-16T03:53:16.025214604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:53:16.205060 containerd[1640]: time="2025-12-16T03:53:16.204990365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ngtdd,Uid:d650a02a-f60f-480c-a3b2-4b144ff8489f,Namespace:calico-system,Attempt:0,}" Dec 16 03:53:16.339562 containerd[1640]: time="2025-12-16T03:53:16.339365024Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:16.341097 containerd[1640]: time="2025-12-16T03:53:16.341004522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:53:16.341298 containerd[1640]: time="2025-12-16T03:53:16.341105577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:16.358835 kubelet[2985]: E1216 03:53:16.358760 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:53:16.358835 kubelet[2985]: E1216 03:53:16.358837 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:53:16.360617 kubelet[2985]: E1216 03:53:16.359863 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrplh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabil
ities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4px76_calico-system(68135f19-2992-417d-b99b-f5dddddbe1d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:16.362213 kubelet[2985]: E1216 03:53:16.362010 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:53:16.417335 systemd-networkd[1555]: cali14edbd48f7b: Link UP Dec 16 03:53:16.418694 systemd-networkd[1555]: cali14edbd48f7b: Gained carrier Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.272 [INFO][5160] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{srv--n64tt.gb1.brightbox.com-k8s-goldmane--666569f655--ngtdd-eth0 goldmane-666569f655- calico-system d650a02a-f60f-480c-a3b2-4b144ff8489f 843 0 2025-12-16 03:52:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-n64tt.gb1.brightbox.com goldmane-666569f655-ngtdd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali14edbd48f7b [] [] }} ContainerID="876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" Namespace="calico-system" Pod="goldmane-666569f655-ngtdd" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-goldmane--666569f655--ngtdd-" Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.273 [INFO][5160] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" Namespace="calico-system" Pod="goldmane-666569f655-ngtdd" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-goldmane--666569f655--ngtdd-eth0" Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.343 [INFO][5173] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" HandleID="k8s-pod-network.876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" Workload="srv--n64tt.gb1.brightbox.com-k8s-goldmane--666569f655--ngtdd-eth0" Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.343 [INFO][5173] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" HandleID="k8s-pod-network.876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" Workload="srv--n64tt.gb1.brightbox.com-k8s-goldmane--666569f655--ngtdd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"srv-n64tt.gb1.brightbox.com", "pod":"goldmane-666569f655-ngtdd", "timestamp":"2025-12-16 03:53:16.343565187 +0000 UTC"}, Hostname:"srv-n64tt.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.343 [INFO][5173] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.343 [INFO][5173] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.344 [INFO][5173] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-n64tt.gb1.brightbox.com' Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.360 [INFO][5173] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.367 [INFO][5173] ipam/ipam.go 394: Looking up existing affinities for host host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.376 [INFO][5173] ipam/ipam.go 511: Trying affinity for 192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.381 [INFO][5173] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.388 [INFO][5173] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.388 [INFO][5173] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.124.64/26 handle="k8s-pod-network.876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.391 [INFO][5173] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.396 [INFO][5173] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.407 [INFO][5173] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.72/26] block=192.168.124.64/26 handle="k8s-pod-network.876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.407 [INFO][5173] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.72/26] handle="k8s-pod-network.876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" host="srv-n64tt.gb1.brightbox.com" Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.407 [INFO][5173] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:53:16.447943 containerd[1640]: 2025-12-16 03:53:16.408 [INFO][5173] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.72/26] IPv6=[] ContainerID="876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" HandleID="k8s-pod-network.876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" Workload="srv--n64tt.gb1.brightbox.com-k8s-goldmane--666569f655--ngtdd-eth0" Dec 16 03:53:16.451079 containerd[1640]: 2025-12-16 03:53:16.411 [INFO][5160] cni-plugin/k8s.go 418: Populated endpoint ContainerID="876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" Namespace="calico-system" Pod="goldmane-666569f655-ngtdd" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-goldmane--666569f655--ngtdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n64tt.gb1.brightbox.com-k8s-goldmane--666569f655--ngtdd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d650a02a-f60f-480c-a3b2-4b144ff8489f", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 52, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n64tt.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-ngtdd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali14edbd48f7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:53:16.451079 containerd[1640]: 2025-12-16 03:53:16.411 [INFO][5160] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.72/32] ContainerID="876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" Namespace="calico-system" Pod="goldmane-666569f655-ngtdd" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-goldmane--666569f655--ngtdd-eth0" Dec 16 03:53:16.451079 containerd[1640]: 2025-12-16 03:53:16.411 [INFO][5160] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14edbd48f7b ContainerID="876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" Namespace="calico-system" Pod="goldmane-666569f655-ngtdd" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-goldmane--666569f655--ngtdd-eth0" Dec 16 03:53:16.451079 containerd[1640]: 2025-12-16 03:53:16.419 [INFO][5160] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" Namespace="calico-system" Pod="goldmane-666569f655-ngtdd" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-goldmane--666569f655--ngtdd-eth0" Dec 16 03:53:16.451079 containerd[1640]: 2025-12-16 03:53:16.420 [INFO][5160] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" Namespace="calico-system" Pod="goldmane-666569f655-ngtdd" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-goldmane--666569f655--ngtdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n64tt.gb1.brightbox.com-k8s-goldmane--666569f655--ngtdd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", 
SelfLink:"", UID:"d650a02a-f60f-480c-a3b2-4b144ff8489f", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 52, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n64tt.gb1.brightbox.com", ContainerID:"876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c", Pod:"goldmane-666569f655-ngtdd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali14edbd48f7b", MAC:"2a:54:3a:f6:ac:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:53:16.451079 containerd[1640]: 2025-12-16 03:53:16.442 [INFO][5160] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" Namespace="calico-system" Pod="goldmane-666569f655-ngtdd" WorkloadEndpoint="srv--n64tt.gb1.brightbox.com-k8s-goldmane--666569f655--ngtdd-eth0" Dec 16 03:53:16.495000 audit[5192]: NETFILTER_CFG table=filter:145 family=2 entries=74 op=nft_register_chain pid=5192 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:53:16.495000 audit[5192]: SYSCALL arch=c000003e syscall=46 success=yes exit=35160 a0=3 a1=7ffcb9fa8c50 a2=0 a3=7ffcb9fa8c3c items=0 ppid=4432 pid=5192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:16.495000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:53:16.511008 containerd[1640]: time="2025-12-16T03:53:16.510932805Z" level=info msg="connecting to shim 876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c" address="unix:///run/containerd/s/bdb704fb9227a3a942eac4c977467137052a7abb4f9e8dce0abf934547f3593d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:53:16.570089 systemd[1]: Started cri-containerd-876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c.scope - libcontainer container 876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c. Dec 16 03:53:16.593000 audit: BPF prog-id=255 op=LOAD Dec 16 03:53:16.596000 audit: BPF prog-id=256 op=LOAD Dec 16 03:53:16.596000 audit[5211]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5200 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:16.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366433373131343339643732343963626231346465393363376466 Dec 16 03:53:16.596000 audit: BPF prog-id=256 op=UNLOAD Dec 16 03:53:16.596000 audit[5211]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5200 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:53:16.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366433373131343339643732343963626231346465393363376466 Dec 16 03:53:16.596000 audit: BPF prog-id=257 op=LOAD Dec 16 03:53:16.596000 audit[5211]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5200 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:16.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366433373131343339643732343963626231346465393363376466 Dec 16 03:53:16.596000 audit: BPF prog-id=258 op=LOAD Dec 16 03:53:16.596000 audit[5211]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5200 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:16.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366433373131343339643732343963626231346465393363376466 Dec 16 03:53:16.596000 audit: BPF prog-id=258 op=UNLOAD Dec 16 03:53:16.596000 audit[5211]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5200 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:16.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366433373131343339643732343963626231346465393363376466 Dec 16 03:53:16.596000 audit: BPF prog-id=257 op=UNLOAD Dec 16 03:53:16.596000 audit[5211]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5200 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:16.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366433373131343339643732343963626231346465393363376466 Dec 16 03:53:16.596000 audit: BPF prog-id=259 op=LOAD Dec 16 03:53:16.596000 audit[5211]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5200 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:16.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837366433373131343339643732343963626231346465393363376466 Dec 16 03:53:16.675554 containerd[1640]: time="2025-12-16T03:53:16.675417986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ngtdd,Uid:d650a02a-f60f-480c-a3b2-4b144ff8489f,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"876d3711439d7249cbb14de93c7df53d4a0ac38d9d0386695ce4d1cd6afe326c\"" Dec 16 03:53:16.679391 containerd[1640]: time="2025-12-16T03:53:16.679328996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:53:16.718275 kubelet[2985]: E1216 03:53:16.718202 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:53:16.987854 containerd[1640]: time="2025-12-16T03:53:16.987651994Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:16.989517 containerd[1640]: time="2025-12-16T03:53:16.989374407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:53:16.989517 containerd[1640]: time="2025-12-16T03:53:16.989466343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:16.990004 kubelet[2985]: E1216 03:53:16.989676 2985 log.go:32] "PullImage from image service failed" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:53:16.990004 kubelet[2985]: E1216 03:53:16.989764 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:53:16.990205 kubelet[2985]: E1216 03:53:16.989983 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n
stgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ngtdd_calico-system(d650a02a-f60f-480c-a3b2-4b144ff8489f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:16.991518 kubelet[2985]: E1216 03:53:16.991472 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ngtdd" podUID="d650a02a-f60f-480c-a3b2-4b144ff8489f" Dec 16 03:53:17.239069 systemd-networkd[1555]: cali1558a23f735: Gained IPv6LL Dec 16 03:53:17.623159 systemd-networkd[1555]: cali14edbd48f7b: Gained IPv6LL Dec 16 03:53:17.723516 kubelet[2985]: E1216 03:53:17.723180 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ngtdd" podUID="d650a02a-f60f-480c-a3b2-4b144ff8489f" Dec 16 03:53:17.765000 audit[5244]: NETFILTER_CFG table=filter:146 family=2 entries=14 op=nft_register_rule pid=5244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:53:17.765000 audit[5244]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe564fbdc0 a2=0 a3=7ffe564fbdac items=0 ppid=3138 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:17.765000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:53:17.770000 audit[5244]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:53:17.770000 audit[5244]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe564fbdc0 a2=0 a3=7ffe564fbdac items=0 ppid=3138 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:17.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:53:18.194160 containerd[1640]: time="2025-12-16T03:53:18.193055946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:53:18.500232 containerd[1640]: time="2025-12-16T03:53:18.499884296Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:18.501251 containerd[1640]: time="2025-12-16T03:53:18.501089778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:53:18.501251 containerd[1640]: time="2025-12-16T03:53:18.501200873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:18.501536 kubelet[2985]: E1216 03:53:18.501440 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:53:18.501690 kubelet[2985]: E1216 03:53:18.501542 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:53:18.501833 kubelet[2985]: E1216 03:53:18.501778 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a60c07445d084b0f859deb3c2d6c59e4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tlhfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b9867c46-mhs6d_calico-system(cc879cee-dcea-4d9b-a071-304996197deb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:18.504831 containerd[1640]: time="2025-12-16T03:53:18.504782638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:53:18.814174 containerd[1640]: 
time="2025-12-16T03:53:18.814079041Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:18.815312 containerd[1640]: time="2025-12-16T03:53:18.815263921Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:53:18.815489 containerd[1640]: time="2025-12-16T03:53:18.815271322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:18.815801 kubelet[2985]: E1216 03:53:18.815702 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:53:18.816587 kubelet[2985]: E1216 03:53:18.816302 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:53:18.816587 kubelet[2985]: E1216 03:53:18.816510 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlhfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b9867c46-mhs6d_calico-system(cc879cee-dcea-4d9b-a071-304996197deb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:18.818098 kubelet[2985]: E1216 03:53:18.817696 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9867c46-mhs6d" podUID="cc879cee-dcea-4d9b-a071-304996197deb" Dec 16 03:53:22.193521 containerd[1640]: time="2025-12-16T03:53:22.192853296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:53:22.514101 containerd[1640]: time="2025-12-16T03:53:22.513929897Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:22.517767 containerd[1640]: time="2025-12-16T03:53:22.517706978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:53:22.517858 containerd[1640]: time="2025-12-16T03:53:22.517837148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:22.518028 kubelet[2985]: E1216 03:53:22.517994 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:53:22.518787 kubelet[2985]: E1216 03:53:22.518060 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:53:22.518787 kubelet[2985]: E1216 03:53:22.518226 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkjc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHan
dler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76cd974fc5-ks52b_calico-system(ada9a6ed-985f-4b6f-99d7-ef5caf58dd85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:22.519566 kubelet[2985]: E1216 03:53:22.519497 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" 
podUID="ada9a6ed-985f-4b6f-99d7-ef5caf58dd85" Dec 16 03:53:30.195782 containerd[1640]: time="2025-12-16T03:53:30.195428089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:53:30.507273 containerd[1640]: time="2025-12-16T03:53:30.507085363Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:30.508258 containerd[1640]: time="2025-12-16T03:53:30.508210343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:53:30.508350 containerd[1640]: time="2025-12-16T03:53:30.508317235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:30.508986 containerd[1640]: time="2025-12-16T03:53:30.508938580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:53:30.509054 kubelet[2985]: E1216 03:53:30.508500 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:53:30.509054 kubelet[2985]: E1216 03:53:30.508564 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:53:30.510343 kubelet[2985]: E1216 03:53:30.509489 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7l2pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-64754b8886-wksvf_calico-apiserver(84a83236-0b05-4630-b32a-21eabf997946): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:30.511646 kubelet[2985]: E1216 03:53:30.511338 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" podUID="84a83236-0b05-4630-b32a-21eabf997946" Dec 16 03:53:30.818644 containerd[1640]: time="2025-12-16T03:53:30.818590688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:30.819750 containerd[1640]: time="2025-12-16T03:53:30.819691348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:53:30.819836 containerd[1640]: time="2025-12-16T03:53:30.819704845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:30.820220 kubelet[2985]: E1216 03:53:30.820164 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:53:30.820436 kubelet[2985]: E1216 03:53:30.820235 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:53:30.820980 kubelet[2985]: E1216 03:53:30.820817 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nstgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ngtdd_calico-system(d650a02a-f60f-480c-a3b2-4b144ff8489f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:30.821534 containerd[1640]: time="2025-12-16T03:53:30.821500256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:53:30.822496 kubelet[2985]: E1216 03:53:30.822402 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ngtdd" 
podUID="d650a02a-f60f-480c-a3b2-4b144ff8489f" Dec 16 03:53:31.186799 containerd[1640]: time="2025-12-16T03:53:31.186640214Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:31.187835 containerd[1640]: time="2025-12-16T03:53:31.187787871Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:53:31.188298 containerd[1640]: time="2025-12-16T03:53:31.187896373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:31.188371 kubelet[2985]: E1216 03:53:31.188139 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:53:31.188371 kubelet[2985]: E1216 03:53:31.188197 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:53:31.188588 kubelet[2985]: E1216 03:53:31.188503 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-psz9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64754b8886-qt5xx_calico-apiserver(050887c2-252c-4b73-aa88-7211b3356790): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:31.189016 containerd[1640]: time="2025-12-16T03:53:31.188986061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:53:31.189815 kubelet[2985]: E1216 03:53:31.189776 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-qt5xx" podUID="050887c2-252c-4b73-aa88-7211b3356790" Dec 16 03:53:31.505204 containerd[1640]: time="2025-12-16T03:53:31.504984525Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:31.508650 containerd[1640]: time="2025-12-16T03:53:31.506391989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:53:31.508650 containerd[1640]: time="2025-12-16T03:53:31.506455440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:31.508800 kubelet[2985]: E1216 03:53:31.506829 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:53:31.508800 kubelet[2985]: E1216 03:53:31.506894 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:53:31.508800 kubelet[2985]: E1216 03:53:31.507087 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrplh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4px76_calico-system(68135f19-2992-417d-b99b-f5dddddbe1d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:31.511067 containerd[1640]: time="2025-12-16T03:53:31.510940108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:53:31.821517 containerd[1640]: time="2025-12-16T03:53:31.821368833Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:31.822995 containerd[1640]: time="2025-12-16T03:53:31.822939256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:53:31.823178 containerd[1640]: time="2025-12-16T03:53:31.823063976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:31.823567 kubelet[2985]: E1216 03:53:31.823504 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:53:31.824256 kubelet[2985]: E1216 03:53:31.823584 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:53:31.824975 kubelet[2985]: E1216 03:53:31.824879 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrplh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4px76_calico-system(68135f19-2992-417d-b99b-f5dddddbe1d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:31.826515 kubelet[2985]: E1216 03:53:31.826434 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:53:34.194852 kubelet[2985]: E1216 03:53:34.194776 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9867c46-mhs6d" podUID="cc879cee-dcea-4d9b-a071-304996197deb" Dec 16 03:53:36.192623 kubelet[2985]: E1216 03:53:36.192381 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" podUID="ada9a6ed-985f-4b6f-99d7-ef5caf58dd85" Dec 16 03:53:38.701168 systemd[1]: Started sshd@9-10.230.36.234:22-139.178.89.65:46610.service - OpenSSH per-connection server daemon (139.178.89.65:46610). Dec 16 03:53:38.720422 kernel: kauditd_printk_skb: 80 callbacks suppressed Dec 16 03:53:38.720524 kernel: audit: type=1130 audit(1765857218.700:752): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.36.234:22-139.178.89.65:46610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:53:38.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.36.234:22-139.178.89.65:46610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:53:39.570000 audit[5289]: USER_ACCT pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:39.577859 kernel: audit: type=1101 audit(1765857219.570:753): pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:39.577979 sshd[5289]: Accepted publickey for core from 139.178.89.65 port 46610 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 03:53:39.585077 kernel: audit: type=1103 audit(1765857219.573:754): pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:39.573000 audit[5289]: CRED_ACQ pid=5289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:39.580192 sshd-session[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:53:39.589884 kernel: audit: type=1006 audit(1765857219.573:755): pid=5289 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 03:53:39.573000 audit[5289]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb7bbc600 a2=3 a3=0 items=0 ppid=1 pid=5289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:39.603757 kernel: audit: type=1300 audit(1765857219.573:755): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb7bbc600 a2=3 a3=0 items=0 ppid=1 pid=5289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:39.573000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:53:39.609339 kernel: audit: type=1327 audit(1765857219.573:755): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:53:39.612154 systemd-logind[1614]: New session 13 of user core. Dec 16 03:53:39.618135 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 03:53:39.627000 audit[5289]: USER_START pid=5289 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:39.634750 kernel: audit: type=1105 audit(1765857219.627:756): pid=5289 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:39.638000 audit[5293]: CRED_ACQ pid=5293 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:39.643810 kernel: audit: type=1103 audit(1765857219.638:757): 
pid=5293 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:40.750977 sshd[5293]: Connection closed by 139.178.89.65 port 46610 Dec 16 03:53:40.751933 sshd-session[5289]: pam_unix(sshd:session): session closed for user core Dec 16 03:53:40.758000 audit[5289]: USER_END pid=5289 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:40.773131 kernel: audit: type=1106 audit(1765857220.758:758): pid=5289 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:40.777492 systemd-logind[1614]: Session 13 logged out. Waiting for processes to exit. Dec 16 03:53:40.779863 systemd[1]: sshd@9-10.230.36.234:22-139.178.89.65:46610.service: Deactivated successfully. 
Dec 16 03:53:40.787698 kernel: audit: type=1104 audit(1765857220.758:759): pid=5289 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:40.758000 audit[5289]: CRED_DISP pid=5289 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:40.785287 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 03:53:40.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.36.234:22-139.178.89.65:46610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:53:40.789886 systemd-logind[1614]: Removed session 13. Dec 16 03:53:41.195073 kubelet[2985]: E1216 03:53:41.194509 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" podUID="84a83236-0b05-4630-b32a-21eabf997946" Dec 16 03:53:42.192830 kubelet[2985]: E1216 03:53:42.192771 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ngtdd" podUID="d650a02a-f60f-480c-a3b2-4b144ff8489f" Dec 16 03:53:44.196278 kubelet[2985]: E1216 03:53:44.196188 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:53:45.909615 systemd[1]: Started sshd@10-10.230.36.234:22-139.178.89.65:51210.service - OpenSSH per-connection server daemon (139.178.89.65:51210). Dec 16 03:53:45.923746 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:53:45.923851 kernel: audit: type=1130 audit(1765857225.908:761): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.36.234:22-139.178.89.65:51210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:53:45.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.36.234:22-139.178.89.65:51210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:53:46.199508 kubelet[2985]: E1216 03:53:46.198736 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-qt5xx" podUID="050887c2-252c-4b73-aa88-7211b3356790" Dec 16 03:53:46.771766 kernel: audit: type=1101 audit(1765857226.759:762): pid=5312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:46.759000 audit[5312]: USER_ACCT pid=5312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:46.772299 sshd[5312]: Accepted publickey for core from 139.178.89.65 port 51210 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 03:53:46.777605 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:53:46.774000 audit[5312]: CRED_ACQ pid=5312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:46.787081 kernel: audit: type=1103 audit(1765857226.774:763): pid=5312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:46.794846 kernel: audit: type=1006 audit(1765857226.774:764): pid=5312 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 03:53:46.804846 kernel: audit: type=1300 audit(1765857226.774:764): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe71ea83d0 a2=3 a3=0 items=0 ppid=1 pid=5312 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:46.774000 audit[5312]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe71ea83d0 a2=3 a3=0 items=0 ppid=1 pid=5312 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:46.807263 systemd-logind[1614]: New session 14 of user core. Dec 16 03:53:46.774000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:53:46.813743 kernel: audit: type=1327 audit(1765857226.774:764): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:53:46.814587 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 03:53:46.821000 audit[5312]: USER_START pid=5312 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:46.830748 kernel: audit: type=1105 audit(1765857226.821:765): pid=5312 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:46.830000 audit[5323]: CRED_ACQ pid=5323 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:46.839748 kernel: audit: type=1103 audit(1765857226.830:766): pid=5323 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:47.424739 sshd[5323]: Connection closed by 139.178.89.65 port 51210 Dec 16 03:53:47.425623 sshd-session[5312]: pam_unix(sshd:session): session closed for user core Dec 16 03:53:47.428000 audit[5312]: USER_END pid=5312 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:47.438527 systemd[1]: 
sshd@10-10.230.36.234:22-139.178.89.65:51210.service: Deactivated successfully. Dec 16 03:53:47.442532 kernel: audit: type=1106 audit(1765857227.428:767): pid=5312 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:47.428000 audit[5312]: CRED_DISP pid=5312 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:47.446192 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 03:53:47.450422 kernel: audit: type=1104 audit(1765857227.428:768): pid=5312 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:47.449804 systemd-logind[1614]: Session 14 logged out. Waiting for processes to exit. Dec 16 03:53:47.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.36.234:22-139.178.89.65:51210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:53:47.453033 systemd-logind[1614]: Removed session 14. 
Dec 16 03:53:48.195750 containerd[1640]: time="2025-12-16T03:53:48.194035877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:53:48.515833 containerd[1640]: time="2025-12-16T03:53:48.515048141Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:48.518306 containerd[1640]: time="2025-12-16T03:53:48.518245990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:53:48.518411 containerd[1640]: time="2025-12-16T03:53:48.518278965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:48.520387 kubelet[2985]: E1216 03:53:48.519097 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:53:48.521590 kubelet[2985]: E1216 03:53:48.521029 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:53:48.521590 kubelet[2985]: E1216 03:53:48.521460 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a60c07445d084b0f859deb3c2d6c59e4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tlhfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b9867c46-mhs6d_calico-system(cc879cee-dcea-4d9b-a071-304996197deb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:48.524699 containerd[1640]: time="2025-12-16T03:53:48.524664317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:53:48.852003 containerd[1640]: 
time="2025-12-16T03:53:48.850901850Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:48.852684 containerd[1640]: time="2025-12-16T03:53:48.852636461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:53:48.852824 containerd[1640]: time="2025-12-16T03:53:48.852771821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:48.853848 kubelet[2985]: E1216 03:53:48.853019 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:53:48.853848 kubelet[2985]: E1216 03:53:48.853082 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:53:48.854360 kubelet[2985]: E1216 03:53:48.854153 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlhfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b9867c46-mhs6d_calico-system(cc879cee-dcea-4d9b-a071-304996197deb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:48.856126 kubelet[2985]: E1216 03:53:48.856071 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9867c46-mhs6d" podUID="cc879cee-dcea-4d9b-a071-304996197deb" Dec 16 03:53:51.195011 containerd[1640]: time="2025-12-16T03:53:51.193674633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:53:51.527797 containerd[1640]: time="2025-12-16T03:53:51.527479795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:51.529023 containerd[1640]: time="2025-12-16T03:53:51.528982595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:53:51.529275 containerd[1640]: time="2025-12-16T03:53:51.529140010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:51.529813 kubelet[2985]: E1216 03:53:51.529723 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:53:51.530331 kubelet[2985]: E1216 03:53:51.529814 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:53:51.530331 kubelet[2985]: E1216 03:53:51.530056 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkjc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHan
dler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76cd974fc5-ks52b_calico-system(ada9a6ed-985f-4b6f-99d7-ef5caf58dd85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:51.531991 kubelet[2985]: E1216 03:53:51.531932 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" 
podUID="ada9a6ed-985f-4b6f-99d7-ef5caf58dd85" Dec 16 03:53:52.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.36.234:22-139.178.89.65:56936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:53:52.594163 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:53:52.594220 kernel: audit: type=1130 audit(1765857232.583:770): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.36.234:22-139.178.89.65:56936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:53:52.583904 systemd[1]: Started sshd@11-10.230.36.234:22-139.178.89.65:56936.service - OpenSSH per-connection server daemon (139.178.89.65:56936). Dec 16 03:53:53.194704 containerd[1640]: time="2025-12-16T03:53:53.194640696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:53:53.403121 kernel: audit: type=1101 audit(1765857233.389:771): pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:53.389000 audit[5336]: USER_ACCT pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:53.403458 sshd[5336]: Accepted publickey for core from 139.178.89.65 port 56936 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 03:53:53.408548 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:53:53.406000 audit[5336]: CRED_ACQ pid=5336 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:53.416767 kernel: audit: type=1103 audit(1765857233.406:772): pid=5336 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:53.423497 kernel: audit: type=1006 audit(1765857233.406:773): pid=5336 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 03:53:53.423591 kernel: audit: type=1300 audit(1765857233.406:773): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca674f670 a2=3 a3=0 items=0 ppid=1 pid=5336 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:53.406000 audit[5336]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca674f670 a2=3 a3=0 items=0 ppid=1 pid=5336 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:53.430962 kernel: audit: type=1327 audit(1765857233.406:773): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:53:53.406000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:53:53.428794 systemd-logind[1614]: New session 15 of user core. Dec 16 03:53:53.437939 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 03:53:53.444000 audit[5336]: USER_START pid=5336 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:53.452764 kernel: audit: type=1105 audit(1765857233.444:774): pid=5336 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:53.456000 audit[5340]: CRED_ACQ pid=5340 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:53.462765 kernel: audit: type=1103 audit(1765857233.456:775): pid=5340 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:53.514760 containerd[1640]: time="2025-12-16T03:53:53.514070738Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:53.516571 containerd[1640]: time="2025-12-16T03:53:53.516494063Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:53:53.516571 containerd[1640]: time="2025-12-16T03:53:53.516538920Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:53.517269 kubelet[2985]: E1216 03:53:53.517189 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:53:53.517751 kubelet[2985]: E1216 03:53:53.517295 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:53:53.518734 kubelet[2985]: E1216 03:53:53.517892 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nstgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ngtdd_calico-system(d650a02a-f60f-480c-a3b2-4b144ff8489f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:53.519143 kubelet[2985]: E1216 03:53:53.519102 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ngtdd" podUID="d650a02a-f60f-480c-a3b2-4b144ff8489f" Dec 16 03:53:53.935938 sshd[5340]: Connection closed by 139.178.89.65 port 56936 Dec 16 03:53:53.937096 sshd-session[5336]: pam_unix(sshd:session): session closed for user core Dec 16 03:53:53.940000 audit[5336]: USER_END pid=5336 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:53.947757 kernel: audit: type=1106 audit(1765857233.940:776): pid=5336 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:53.951011 systemd[1]: sshd@11-10.230.36.234:22-139.178.89.65:56936.service: Deactivated successfully. 
Dec 16 03:53:53.947000 audit[5336]: CRED_DISP pid=5336 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:53.957808 kernel: audit: type=1104 audit(1765857233.947:777): pid=5336 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:53.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.36.234:22-139.178.89.65:56936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:53:53.963191 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 03:53:53.969358 systemd-logind[1614]: Session 15 logged out. Waiting for processes to exit. Dec 16 03:53:53.972900 systemd-logind[1614]: Removed session 15. Dec 16 03:53:54.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.36.234:22-139.178.89.65:56940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:53:54.096102 systemd[1]: Started sshd@12-10.230.36.234:22-139.178.89.65:56940.service - OpenSSH per-connection server daemon (139.178.89.65:56940). 
Dec 16 03:53:54.900582 sshd[5353]: Accepted publickey for core from 139.178.89.65 port 56940 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 03:53:54.899000 audit[5353]: USER_ACCT pid=5353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:54.901000 audit[5353]: CRED_ACQ pid=5353 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:54.902000 audit[5353]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec10a8b70 a2=3 a3=0 items=0 ppid=1 pid=5353 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:54.902000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:53:54.904204 sshd-session[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:53:54.916280 systemd-logind[1614]: New session 16 of user core. Dec 16 03:53:54.921976 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 03:53:54.926000 audit[5353]: USER_START pid=5353 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:54.930000 audit[5357]: CRED_ACQ pid=5357 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:55.193632 containerd[1640]: time="2025-12-16T03:53:55.193431704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:53:55.512450 containerd[1640]: time="2025-12-16T03:53:55.512254257Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:55.513990 containerd[1640]: time="2025-12-16T03:53:55.513934582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:53:55.514181 containerd[1640]: time="2025-12-16T03:53:55.513954993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:55.514361 kubelet[2985]: E1216 03:53:55.514280 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:53:55.514361 kubelet[2985]: E1216 03:53:55.514356 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:53:55.515425 kubelet[2985]: E1216 03:53:55.514596 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7l2pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64754b8886-wksvf_calico-apiserver(84a83236-0b05-4630-b32a-21eabf997946): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:55.516280 kubelet[2985]: E1216 03:53:55.516231 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" podUID="84a83236-0b05-4630-b32a-21eabf997946" Dec 16 03:53:55.626604 sshd[5357]: Connection closed by 139.178.89.65 port 56940 Dec 16 03:53:55.628909 sshd-session[5353]: pam_unix(sshd:session): session closed for user core Dec 16 03:53:55.635000 audit[5353]: USER_END pid=5353 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:55.635000 audit[5353]: CRED_DISP pid=5353 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:55.641315 systemd[1]: sshd@12-10.230.36.234:22-139.178.89.65:56940.service: Deactivated successfully. Dec 16 03:53:55.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.36.234:22-139.178.89.65:56940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:53:55.648630 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 03:53:55.655223 systemd-logind[1614]: Session 16 logged out. Waiting for processes to exit. Dec 16 03:53:55.657598 systemd-logind[1614]: Removed session 16. Dec 16 03:53:55.790170 systemd[1]: Started sshd@13-10.230.36.234:22-139.178.89.65:56956.service - OpenSSH per-connection server daemon (139.178.89.65:56956). Dec 16 03:53:55.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.36.234:22-139.178.89.65:56956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:53:56.644000 audit[5367]: USER_ACCT pid=5367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:56.647161 sshd[5367]: Accepted publickey for core from 139.178.89.65 port 56956 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 03:53:56.648000 audit[5367]: CRED_ACQ pid=5367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:56.648000 audit[5367]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1ee186b0 a2=3 a3=0 items=0 ppid=1 pid=5367 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:53:56.648000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:53:56.650860 sshd-session[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:53:56.662455 systemd-logind[1614]: New session 17 of user core. Dec 16 03:53:56.667884 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 03:53:56.674000 audit[5367]: USER_START pid=5367 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:56.678000 audit[5373]: CRED_ACQ pid=5373 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:57.233275 sshd[5373]: Connection closed by 139.178.89.65 port 56956 Dec 16 03:53:57.236310 sshd-session[5367]: pam_unix(sshd:session): session closed for user core Dec 16 03:53:57.238000 audit[5367]: USER_END pid=5367 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:57.238000 audit[5367]: CRED_DISP pid=5367 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:53:57.245034 systemd[1]: sshd@13-10.230.36.234:22-139.178.89.65:56956.service: Deactivated successfully. Dec 16 03:53:57.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.36.234:22-139.178.89.65:56956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:53:57.248797 systemd[1]: session-17.scope: Deactivated successfully. 
Dec 16 03:53:57.250634 systemd-logind[1614]: Session 17 logged out. Waiting for processes to exit. Dec 16 03:53:57.253093 systemd-logind[1614]: Removed session 17. Dec 16 03:53:59.194596 containerd[1640]: time="2025-12-16T03:53:59.192921119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:53:59.505493 containerd[1640]: time="2025-12-16T03:53:59.505118873Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:59.508448 containerd[1640]: time="2025-12-16T03:53:59.508371501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:53:59.508802 containerd[1640]: time="2025-12-16T03:53:59.508434939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:59.510627 kubelet[2985]: E1216 03:53:59.509110 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:53:59.510627 kubelet[2985]: E1216 03:53:59.509174 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:53:59.510627 kubelet[2985]: E1216 03:53:59.509382 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrplh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4px76_calico-system(68135f19-2992-417d-b99b-f5dddddbe1d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 03:53:59.516232 containerd[1640]: time="2025-12-16T03:53:59.515852416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:53:59.834611 containerd[1640]: time="2025-12-16T03:53:59.834110610Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:53:59.839658 containerd[1640]: time="2025-12-16T03:53:59.839548089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:53:59.840841 containerd[1640]: time="2025-12-16T03:53:59.839566682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:53:59.840914 kubelet[2985]: E1216 03:53:59.840231 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:53:59.840914 kubelet[2985]: E1216 03:53:59.840301 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:53:59.840914 kubelet[2985]: E1216 03:53:59.840528 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrplh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4px76_calico-system(68135f19-2992-417d-b99b-f5dddddbe1d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:53:59.841876 kubelet[2985]: E1216 03:53:59.841817 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:54:01.194779 containerd[1640]: time="2025-12-16T03:54:01.194563947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:54:01.505002 containerd[1640]: time="2025-12-16T03:54:01.504816876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:54:01.506006 containerd[1640]: time="2025-12-16T03:54:01.505961801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:54:01.506095 containerd[1640]: time="2025-12-16T03:54:01.506061347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:54:01.506361 kubelet[2985]: E1216 03:54:01.506297 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:54:01.506851 kubelet[2985]: E1216 03:54:01.506382 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:54:01.518061 kubelet[2985]: E1216 03:54:01.506579 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-psz9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64754b8886-qt5xx_calico-apiserver(050887c2-252c-4b73-aa88-7211b3356790): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:54:01.519157 kubelet[2985]: E1216 03:54:01.519095 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-qt5xx" podUID="050887c2-252c-4b73-aa88-7211b3356790" Dec 16 03:54:02.193679 kubelet[2985]: E1216 03:54:02.193601 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" podUID="ada9a6ed-985f-4b6f-99d7-ef5caf58dd85" Dec 16 03:54:02.195248 kubelet[2985]: E1216 03:54:02.195194 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9867c46-mhs6d" podUID="cc879cee-dcea-4d9b-a071-304996197deb" Dec 16 03:54:02.409765 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 03:54:02.409936 kernel: audit: type=1130 audit(1765857242.397:797): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.36.234:22-139.178.89.65:44526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:02.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.36.234:22-139.178.89.65:44526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:54:02.399193 systemd[1]: Started sshd@14-10.230.36.234:22-139.178.89.65:44526.service - OpenSSH per-connection server daemon (139.178.89.65:44526). Dec 16 03:54:03.211758 kernel: audit: type=1101 audit(1765857243.204:798): pid=5392 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:03.204000 audit[5392]: USER_ACCT pid=5392 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:03.208857 sshd-session[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:54:03.212467 sshd[5392]: Accepted publickey for core from 139.178.89.65 port 44526 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 03:54:03.205000 audit[5392]: CRED_ACQ pid=5392 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:03.217748 kernel: audit: type=1103 audit(1765857243.205:799): pid=5392 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:03.224762 kernel: audit: type=1006 audit(1765857243.205:800): pid=5392 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 03:54:03.205000 audit[5392]: SYSCALL arch=c000003e syscall=1 success=yes 
exit=3 a0=8 a1=7ffd021299f0 a2=3 a3=0 items=0 ppid=1 pid=5392 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:03.232750 kernel: audit: type=1300 audit(1765857243.205:800): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd021299f0 a2=3 a3=0 items=0 ppid=1 pid=5392 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:03.236504 systemd-logind[1614]: New session 18 of user core. Dec 16 03:54:03.205000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:54:03.243762 kernel: audit: type=1327 audit(1765857243.205:800): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:54:03.244087 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 03:54:03.249000 audit[5392]: USER_START pid=5392 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:03.257816 kernel: audit: type=1105 audit(1765857243.249:801): pid=5392 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:03.258000 audit[5396]: CRED_ACQ pid=5396 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:03.272743 kernel: audit: type=1103 audit(1765857243.258:802): pid=5396 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:03.821432 sshd[5396]: Connection closed by 139.178.89.65 port 44526 Dec 16 03:54:03.822882 sshd-session[5392]: pam_unix(sshd:session): session closed for user core Dec 16 03:54:03.828000 audit[5392]: USER_END pid=5392 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:03.842748 kernel: audit: type=1106 audit(1765857243.828:803): pid=5392 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:03.829000 audit[5392]: CRED_DISP pid=5392 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:03.848470 systemd[1]: sshd@14-10.230.36.234:22-139.178.89.65:44526.service: Deactivated successfully. 
Dec 16 03:54:03.851763 kernel: audit: type=1104 audit(1765857243.829:804): pid=5392 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:03.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.36.234:22-139.178.89.65:44526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:03.856757 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 03:54:03.863510 systemd-logind[1614]: Session 18 logged out. Waiting for processes to exit. Dec 16 03:54:03.866904 systemd-logind[1614]: Removed session 18. Dec 16 03:54:05.193658 kubelet[2985]: E1216 03:54:05.193103 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ngtdd" podUID="d650a02a-f60f-480c-a3b2-4b144ff8489f" Dec 16 03:54:06.192975 kubelet[2985]: E1216 03:54:06.192917 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" podUID="84a83236-0b05-4630-b32a-21eabf997946" Dec 16 03:54:08.983000 audit[1]: 
SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.36.234:22-139.178.89.65:44542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:08.988868 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:54:08.988991 kernel: audit: type=1130 audit(1765857248.983:806): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.36.234:22-139.178.89.65:44542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:08.983558 systemd[1]: Started sshd@15-10.230.36.234:22-139.178.89.65:44542.service - OpenSSH per-connection server daemon (139.178.89.65:44542). Dec 16 03:54:09.845000 audit[5432]: USER_ACCT pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:09.855181 sshd[5432]: Accepted publickey for core from 139.178.89.65 port 44542 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 03:54:09.855923 kernel: audit: type=1101 audit(1765857249.845:807): pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:09.855000 audit[5432]: CRED_ACQ pid=5432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:09.864778 kernel: audit: type=1103 audit(1765857249.855:808): pid=5432 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:09.864906 kernel: audit: type=1006 audit(1765857249.855:809): pid=5432 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 03:54:09.862779 sshd-session[5432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:54:09.855000 audit[5432]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7c22d9f0 a2=3 a3=0 items=0 ppid=1 pid=5432 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:09.855000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:54:09.874137 kernel: audit: type=1300 audit(1765857249.855:809): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7c22d9f0 a2=3 a3=0 items=0 ppid=1 pid=5432 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:09.874239 kernel: audit: type=1327 audit(1765857249.855:809): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:54:09.885882 systemd-logind[1614]: New session 19 of user core. Dec 16 03:54:09.895144 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 03:54:09.900000 audit[5432]: USER_START pid=5432 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:09.908937 kernel: audit: type=1105 audit(1765857249.900:810): pid=5432 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:09.908000 audit[5437]: CRED_ACQ pid=5437 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:09.914745 kernel: audit: type=1103 audit(1765857249.908:811): pid=5437 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:10.198239 kubelet[2985]: E1216 03:54:10.198060 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:54:10.534836 sshd[5437]: Connection closed by 139.178.89.65 port 44542 Dec 16 03:54:10.535265 sshd-session[5432]: pam_unix(sshd:session): session closed for user core Dec 16 03:54:10.538000 audit[5432]: USER_END pid=5432 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:10.548748 kernel: audit: type=1106 audit(1765857250.538:812): pid=5432 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:10.549234 systemd[1]: sshd@15-10.230.36.234:22-139.178.89.65:44542.service: Deactivated successfully. Dec 16 03:54:10.554556 systemd[1]: session-19.scope: Deactivated successfully. 
Dec 16 03:54:10.538000 audit[5432]: CRED_DISP pid=5432 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:10.564832 kernel: audit: type=1104 audit(1765857250.538:813): pid=5432 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:10.565140 systemd-logind[1614]: Session 19 logged out. Waiting for processes to exit. Dec 16 03:54:10.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.36.234:22-139.178.89.65:44542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:10.570573 systemd-logind[1614]: Removed session 19. 
Dec 16 03:54:13.196951 kubelet[2985]: E1216 03:54:13.196863 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" podUID="ada9a6ed-985f-4b6f-99d7-ef5caf58dd85" Dec 16 03:54:13.198176 kubelet[2985]: E1216 03:54:13.198109 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9867c46-mhs6d" podUID="cc879cee-dcea-4d9b-a071-304996197deb" Dec 16 03:54:15.698801 systemd[1]: Started sshd@16-10.230.36.234:22-139.178.89.65:38554.service - OpenSSH per-connection server daemon (139.178.89.65:38554). 
Dec 16 03:54:15.712160 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:54:15.712252 kernel: audit: type=1130 audit(1765857255.698:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.36.234:22-139.178.89.65:38554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:15.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.36.234:22-139.178.89.65:38554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:16.201751 kubelet[2985]: E1216 03:54:16.201117 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ngtdd" podUID="d650a02a-f60f-480c-a3b2-4b144ff8489f" Dec 16 03:54:16.201751 kubelet[2985]: E1216 03:54:16.201580 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-qt5xx" podUID="050887c2-252c-4b73-aa88-7211b3356790" Dec 16 03:54:16.502000 audit[5451]: USER_ACCT pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:16.507238 sshd[5451]: Accepted publickey for core from 139.178.89.65 port 38554 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 03:54:16.511429 kernel: audit: type=1101 audit(1765857256.502:816): pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:16.511325 sshd-session[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:54:16.509000 audit[5451]: CRED_ACQ pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:16.520818 kernel: audit: type=1103 audit(1765857256.509:817): pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:16.509000 audit[5451]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd17d5cfc0 a2=3 a3=0 items=0 ppid=1 pid=5451 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:16.526491 kernel: audit: type=1006 audit(1765857256.509:818): pid=5451 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 16 03:54:16.526574 kernel: audit: type=1300 audit(1765857256.509:818): arch=c000003e syscall=1 success=yes exit=3 a0=8 
a1=7ffd17d5cfc0 a2=3 a3=0 items=0 ppid=1 pid=5451 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:16.509000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:54:16.533757 kernel: audit: type=1327 audit(1765857256.509:818): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:54:16.540210 systemd-logind[1614]: New session 20 of user core. Dec 16 03:54:16.554100 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 03:54:16.563000 audit[5451]: USER_START pid=5451 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:16.570770 kernel: audit: type=1105 audit(1765857256.563:819): pid=5451 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:16.572000 audit[5455]: CRED_ACQ pid=5455 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:16.578747 kernel: audit: type=1103 audit(1765857256.572:820): pid=5455 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh 
res=success' Dec 16 03:54:17.103825 sshd[5455]: Connection closed by 139.178.89.65 port 38554 Dec 16 03:54:17.105412 sshd-session[5451]: pam_unix(sshd:session): session closed for user core Dec 16 03:54:17.108000 audit[5451]: USER_END pid=5451 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:17.108000 audit[5451]: CRED_DISP pid=5451 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:17.117809 kernel: audit: type=1106 audit(1765857257.108:821): pid=5451 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:17.117929 kernel: audit: type=1104 audit(1765857257.108:822): pid=5451 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:17.133807 systemd-logind[1614]: Session 20 logged out. Waiting for processes to exit. Dec 16 03:54:17.134284 systemd[1]: sshd@16-10.230.36.234:22-139.178.89.65:38554.service: Deactivated successfully. 
Dec 16 03:54:17.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.36.234:22-139.178.89.65:38554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:17.138210 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 03:54:17.142906 systemd-logind[1614]: Removed session 20. Dec 16 03:54:19.194314 kubelet[2985]: E1216 03:54:19.194156 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" podUID="84a83236-0b05-4630-b32a-21eabf997946" Dec 16 03:54:22.256432 systemd[1]: Started sshd@17-10.230.36.234:22-139.178.89.65:35490.service - OpenSSH per-connection server daemon (139.178.89.65:35490). Dec 16 03:54:22.260540 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:54:22.260665 kernel: audit: type=1130 audit(1765857262.255:824): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.36.234:22-139.178.89.65:35490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:22.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.36.234:22-139.178.89.65:35490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:54:23.099230 sshd[5467]: Accepted publickey for core from 139.178.89.65 port 35490 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 03:54:23.098000 audit[5467]: USER_ACCT pid=5467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:23.107056 sshd-session[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:54:23.109817 kernel: audit: type=1101 audit(1765857263.098:825): pid=5467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:23.100000 audit[5467]: CRED_ACQ pid=5467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:23.116750 kernel: audit: type=1103 audit(1765857263.100:826): pid=5467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:23.116851 kernel: audit: type=1006 audit(1765857263.100:827): pid=5467 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 16 03:54:23.100000 audit[5467]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7b4e25a0 a2=3 a3=0 items=0 ppid=1 pid=5467 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:23.100000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:54:23.129809 systemd-logind[1614]: New session 21 of user core. Dec 16 03:54:23.131909 kernel: audit: type=1300 audit(1765857263.100:827): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7b4e25a0 a2=3 a3=0 items=0 ppid=1 pid=5467 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:23.131965 kernel: audit: type=1327 audit(1765857263.100:827): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:54:23.135003 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 03:54:23.143000 audit[5467]: USER_START pid=5467 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:23.152911 kernel: audit: type=1105 audit(1765857263.143:828): pid=5467 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:23.148000 audit[5471]: CRED_ACQ pid=5471 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:23.158770 kernel: audit: type=1103 audit(1765857263.148:829): 
pid=5471 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:23.194258 kubelet[2985]: E1216 03:54:23.194191 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:54:23.724924 sshd[5471]: Connection closed by 139.178.89.65 port 35490 Dec 16 03:54:23.727988 sshd-session[5467]: pam_unix(sshd:session): session closed for user core Dec 16 03:54:23.728000 audit[5467]: USER_END pid=5467 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:23.741804 kernel: audit: type=1106 audit(1765857263.728:830): pid=5467 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:23.738602 systemd[1]: sshd@17-10.230.36.234:22-139.178.89.65:35490.service: Deactivated successfully. Dec 16 03:54:23.744683 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 03:54:23.730000 audit[5467]: CRED_DISP pid=5467 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:23.752505 systemd-logind[1614]: Session 21 logged out. Waiting for processes to exit. Dec 16 03:54:23.753053 kernel: audit: type=1104 audit(1765857263.730:831): pid=5467 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:23.754468 systemd-logind[1614]: Removed session 21. Dec 16 03:54:23.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.36.234:22-139.178.89.65:35490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:23.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.36.234:22-139.178.89.65:35492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:23.885833 systemd[1]: Started sshd@18-10.230.36.234:22-139.178.89.65:35492.service - OpenSSH per-connection server daemon (139.178.89.65:35492). 
Dec 16 03:54:24.692000 audit[5483]: USER_ACCT pid=5483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:24.695039 sshd[5483]: Accepted publickey for core from 139.178.89.65 port 35492 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 03:54:24.695000 audit[5483]: CRED_ACQ pid=5483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:24.695000 audit[5483]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd6723510 a2=3 a3=0 items=0 ppid=1 pid=5483 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:24.695000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:54:24.698883 sshd-session[5483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:54:24.715554 systemd-logind[1614]: New session 22 of user core. Dec 16 03:54:24.719960 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 03:54:24.726000 audit[5483]: USER_START pid=5483 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:24.730000 audit[5487]: CRED_ACQ pid=5487 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:25.684101 sshd[5487]: Connection closed by 139.178.89.65 port 35492 Dec 16 03:54:25.687514 sshd-session[5483]: pam_unix(sshd:session): session closed for user core Dec 16 03:54:25.692000 audit[5483]: USER_END pid=5483 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:25.692000 audit[5483]: CRED_DISP pid=5483 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:25.706829 systemd-logind[1614]: Session 22 logged out. Waiting for processes to exit. Dec 16 03:54:25.707503 systemd[1]: sshd@18-10.230.36.234:22-139.178.89.65:35492.service: Deactivated successfully. Dec 16 03:54:25.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.36.234:22-139.178.89.65:35492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:54:25.712237 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 03:54:25.716230 systemd-logind[1614]: Removed session 22. Dec 16 03:54:25.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.36.234:22-139.178.89.65:35498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:25.844430 systemd[1]: Started sshd@19-10.230.36.234:22-139.178.89.65:35498.service - OpenSSH per-connection server daemon (139.178.89.65:35498). Dec 16 03:54:26.196298 kubelet[2985]: E1216 03:54:26.195464 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" podUID="ada9a6ed-985f-4b6f-99d7-ef5caf58dd85" Dec 16 03:54:26.691000 audit[5497]: USER_ACCT pid=5497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:26.693331 sshd[5497]: Accepted publickey for core from 139.178.89.65 port 35498 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 03:54:26.692000 audit[5497]: CRED_ACQ pid=5497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:26.693000 audit[5497]: 
SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff109ea6e0 a2=3 a3=0 items=0 ppid=1 pid=5497 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:26.693000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:54:26.695059 sshd-session[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:54:26.705738 systemd-logind[1614]: New session 23 of user core. Dec 16 03:54:26.714001 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 03:54:26.718000 audit[5497]: USER_START pid=5497 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:26.721000 audit[5507]: CRED_ACQ pid=5507 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:27.195355 kubelet[2985]: E1216 03:54:27.194705 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ngtdd" podUID="d650a02a-f60f-480c-a3b2-4b144ff8489f" Dec 16 03:54:27.195355 kubelet[2985]: E1216 03:54:27.195263 2985 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9867c46-mhs6d" podUID="cc879cee-dcea-4d9b-a071-304996197deb" Dec 16 03:54:28.174000 audit[5517]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5517 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:54:28.182669 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 16 03:54:28.182802 kernel: audit: type=1325 audit(1765857268.174:848): table=filter:148 family=2 entries=26 op=nft_register_rule pid=5517 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:54:28.197902 kubelet[2985]: E1216 03:54:28.197847 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-qt5xx" podUID="050887c2-252c-4b73-aa88-7211b3356790" Dec 16 03:54:28.174000 audit[5517]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff6b183520 a2=0 
a3=7fff6b18350c items=0 ppid=3138 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:28.227764 kernel: audit: type=1300 audit(1765857268.174:848): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff6b183520 a2=0 a3=7fff6b18350c items=0 ppid=3138 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:28.174000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:54:28.232000 audit[5517]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5517 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:54:28.234704 kernel: audit: type=1327 audit(1765857268.174:848): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:54:28.234811 kernel: audit: type=1325 audit(1765857268.232:849): table=nat:149 family=2 entries=20 op=nft_register_rule pid=5517 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:54:28.232000 audit[5517]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff6b183520 a2=0 a3=0 items=0 ppid=3138 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:28.244442 kernel: audit: type=1300 audit(1765857268.232:849): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff6b183520 a2=0 a3=0 items=0 ppid=3138 pid=5517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:28.232000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:54:28.249802 kernel: audit: type=1327 audit(1765857268.232:849): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:54:28.252382 sshd[5507]: Connection closed by 139.178.89.65 port 35498 Dec 16 03:54:28.254655 sshd-session[5497]: pam_unix(sshd:session): session closed for user core Dec 16 03:54:28.257000 audit[5497]: USER_END pid=5497 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:28.264766 kernel: audit: type=1106 audit(1765857268.257:850): pid=5497 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:28.264000 audit[5519]: NETFILTER_CFG table=filter:150 family=2 entries=38 op=nft_register_rule pid=5519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:54:28.270815 kernel: audit: type=1325 audit(1765857268.264:851): table=filter:150 family=2 entries=38 op=nft_register_rule pid=5519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:54:28.270348 systemd[1]: sshd@19-10.230.36.234:22-139.178.89.65:35498.service: Deactivated successfully. 
Dec 16 03:54:28.264000 audit[5519]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff8e039050 a2=0 a3=7fff8e03903c items=0 ppid=3138 pid=5519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:28.283811 kernel: audit: type=1300 audit(1765857268.264:851): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff8e039050 a2=0 a3=7fff8e03903c items=0 ppid=3138 pid=5519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:28.283420 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 03:54:28.264000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:54:28.289289 kernel: audit: type=1327 audit(1765857268.264:851): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:54:28.288736 systemd-logind[1614]: Session 23 logged out. Waiting for processes to exit. Dec 16 03:54:28.265000 audit[5497]: CRED_DISP pid=5497 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:28.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.36.234:22-139.178.89.65:35498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:54:28.272000 audit[5519]: NETFILTER_CFG table=nat:151 family=2 entries=20 op=nft_register_rule pid=5519 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:54:28.272000 audit[5519]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff8e039050 a2=0 a3=0 items=0 ppid=3138 pid=5519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:28.272000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:54:28.293561 systemd-logind[1614]: Removed session 23. Dec 16 03:54:28.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.36.234:22-139.178.89.65:35504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:28.413169 systemd[1]: Started sshd@20-10.230.36.234:22-139.178.89.65:35504.service - OpenSSH per-connection server daemon (139.178.89.65:35504). 
Dec 16 03:54:29.224000 audit[5524]: USER_ACCT pid=5524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:29.225421 sshd[5524]: Accepted publickey for core from 139.178.89.65 port 35504 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 03:54:29.227000 audit[5524]: CRED_ACQ pid=5524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:29.227000 audit[5524]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3e6cfc30 a2=3 a3=0 items=0 ppid=1 pid=5524 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:29.227000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:54:29.229721 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:54:29.244694 systemd-logind[1614]: New session 24 of user core. Dec 16 03:54:29.249902 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 16 03:54:29.256000 audit[5524]: USER_START pid=5524 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:29.260000 audit[5529]: CRED_ACQ pid=5529 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:30.085493 sshd[5529]: Connection closed by 139.178.89.65 port 35504 Dec 16 03:54:30.086433 sshd-session[5524]: pam_unix(sshd:session): session closed for user core Dec 16 03:54:30.090000 audit[5524]: USER_END pid=5524 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:30.090000 audit[5524]: CRED_DISP pid=5524 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:30.095030 systemd[1]: sshd@20-10.230.36.234:22-139.178.89.65:35504.service: Deactivated successfully. Dec 16 03:54:30.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.36.234:22-139.178.89.65:35504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:30.099929 systemd[1]: session-24.scope: Deactivated successfully. 
Dec 16 03:54:30.101416 systemd-logind[1614]: Session 24 logged out. Waiting for processes to exit. Dec 16 03:54:30.103900 systemd-logind[1614]: Removed session 24. Dec 16 03:54:30.243000 systemd[1]: Started sshd@21-10.230.36.234:22-139.178.89.65:35508.service - OpenSSH per-connection server daemon (139.178.89.65:35508). Dec 16 03:54:30.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.36.234:22-139.178.89.65:35508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:31.266000 audit[5540]: USER_ACCT pid=5540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:31.269338 sshd[5540]: Accepted publickey for core from 139.178.89.65 port 35508 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 03:54:31.269000 audit[5540]: CRED_ACQ pid=5540 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:31.269000 audit[5540]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe936f3110 a2=3 a3=0 items=0 ppid=1 pid=5540 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:31.269000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:54:31.272542 sshd-session[5540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:54:31.287009 systemd-logind[1614]: New session 25 of user core. 
Dec 16 03:54:31.294006 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 03:54:31.301000 audit[5540]: USER_START pid=5540 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:31.305000 audit[5544]: CRED_ACQ pid=5544 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:31.852245 sshd[5544]: Connection closed by 139.178.89.65 port 35508 Dec 16 03:54:31.853588 sshd-session[5540]: pam_unix(sshd:session): session closed for user core Dec 16 03:54:31.855000 audit[5540]: USER_END pid=5540 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:31.855000 audit[5540]: CRED_DISP pid=5540 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:31.859504 systemd[1]: sshd@21-10.230.36.234:22-139.178.89.65:35508.service: Deactivated successfully. Dec 16 03:54:31.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.36.234:22-139.178.89.65:35508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:54:31.863515 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 03:54:31.867046 systemd-logind[1614]: Session 25 logged out. Waiting for processes to exit. Dec 16 03:54:31.869160 systemd-logind[1614]: Removed session 25. Dec 16 03:54:32.195645 kubelet[2985]: E1216 03:54:32.194869 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" podUID="84a83236-0b05-4630-b32a-21eabf997946" Dec 16 03:54:34.194609 kubelet[2985]: E1216 03:54:34.194481 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3" Dec 16 03:54:37.011832 systemd[1]: Started sshd@22-10.230.36.234:22-139.178.89.65:46286.service - OpenSSH per-connection server daemon (139.178.89.65:46286). 
Dec 16 03:54:37.018339 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 16 03:54:37.018441 kernel: audit: type=1130 audit(1765857277.010:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.36.234:22-139.178.89.65:46286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:37.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.36.234:22-139.178.89.65:46286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:37.149000 audit[5586]: NETFILTER_CFG table=filter:152 family=2 entries=26 op=nft_register_rule pid=5586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:54:37.154787 kernel: audit: type=1325 audit(1765857277.149:874): table=filter:152 family=2 entries=26 op=nft_register_rule pid=5586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:54:37.154873 kernel: audit: type=1300 audit(1765857277.149:874): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffed9774590 a2=0 a3=7ffed977457c items=0 ppid=3138 pid=5586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:37.149000 audit[5586]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffed9774590 a2=0 a3=7ffed977457c items=0 ppid=3138 pid=5586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:37.149000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:54:37.165809 kernel: audit: type=1327 audit(1765857277.149:874): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:54:37.168000 audit[5586]: NETFILTER_CFG table=nat:153 family=2 entries=104 op=nft_register_chain pid=5586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:54:37.174757 kernel: audit: type=1325 audit(1765857277.168:875): table=nat:153 family=2 entries=104 op=nft_register_chain pid=5586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:54:37.168000 audit[5586]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffed9774590 a2=0 a3=7ffed977457c items=0 ppid=3138 pid=5586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:37.181755 kernel: audit: type=1300 audit(1765857277.168:875): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffed9774590 a2=0 a3=7ffed977457c items=0 ppid=3138 pid=5586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:37.168000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:54:37.186767 kernel: audit: type=1327 audit(1765857277.168:875): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:54:37.825000 audit[5582]: USER_ACCT pid=5582 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:37.831746 sshd[5582]: Accepted publickey for core from 139.178.89.65 port 46286 
ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 03:54:37.835445 sshd-session[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:54:37.833000 audit[5582]: CRED_ACQ pid=5582 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:37.837220 kernel: audit: type=1101 audit(1765857277.825:876): pid=5582 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:37.837312 kernel: audit: type=1103 audit(1765857277.833:877): pid=5582 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:37.833000 audit[5582]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5274b0a0 a2=3 a3=0 items=0 ppid=1 pid=5582 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:37.846539 kernel: audit: type=1006 audit(1765857277.833:878): pid=5582 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 03:54:37.833000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:54:37.852408 systemd-logind[1614]: New session 26 of user core. Dec 16 03:54:37.858028 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 03:54:37.863000 audit[5582]: USER_START pid=5582 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:37.866000 audit[5588]: CRED_ACQ pid=5588 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:38.478354 sshd[5588]: Connection closed by 139.178.89.65 port 46286 Dec 16 03:54:38.479228 sshd-session[5582]: pam_unix(sshd:session): session closed for user core Dec 16 03:54:38.480000 audit[5582]: USER_END pid=5582 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:38.480000 audit[5582]: CRED_DISP pid=5582 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:38.487218 systemd[1]: sshd@22-10.230.36.234:22-139.178.89.65:46286.service: Deactivated successfully. Dec 16 03:54:38.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.36.234:22-139.178.89.65:46286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:38.492551 systemd[1]: session-26.scope: Deactivated successfully. 
Dec 16 03:54:38.495496 systemd-logind[1614]: Session 26 logged out. Waiting for processes to exit. Dec 16 03:54:38.498569 systemd-logind[1614]: Removed session 26. Dec 16 03:54:39.220906 containerd[1640]: time="2025-12-16T03:54:39.196692986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:54:39.565384 containerd[1640]: time="2025-12-16T03:54:39.565092778Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:54:39.566644 containerd[1640]: time="2025-12-16T03:54:39.566590751Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:54:39.566878 containerd[1640]: time="2025-12-16T03:54:39.566605582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:54:39.567256 kubelet[2985]: E1216 03:54:39.567182 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:54:39.568265 kubelet[2985]: E1216 03:54:39.567817 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:54:39.568265 kubelet[2985]: E1216 03:54:39.568165 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a60c07445d084b0f859deb3c2d6c59e4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tlhfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b9867c46-mhs6d_calico-system(cc879cee-dcea-4d9b-a071-304996197deb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:54:39.569803 containerd[1640]: time="2025-12-16T03:54:39.569059051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:54:39.877449 containerd[1640]: 
time="2025-12-16T03:54:39.876490218Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:54:39.878693 containerd[1640]: time="2025-12-16T03:54:39.878588387Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:54:39.878693 containerd[1640]: time="2025-12-16T03:54:39.878656042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:54:39.879324 kubelet[2985]: E1216 03:54:39.879178 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:54:39.879324 kubelet[2985]: E1216 03:54:39.879258 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:54:39.880547 containerd[1640]: time="2025-12-16T03:54:39.880104267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:54:39.883102 kubelet[2985]: E1216 03:54:39.882988 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkjc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76cd974fc5-ks52b_calico-system(ada9a6ed-985f-4b6f-99d7-ef5caf58dd85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:54:39.884282 kubelet[2985]: E1216 03:54:39.884197 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76cd974fc5-ks52b" podUID="ada9a6ed-985f-4b6f-99d7-ef5caf58dd85" Dec 16 03:54:40.193564 containerd[1640]: time="2025-12-16T03:54:40.193039624Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:54:40.197973 containerd[1640]: time="2025-12-16T03:54:40.195584147Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:54:40.197973 containerd[1640]: time="2025-12-16T03:54:40.195668261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:54:40.198209 kubelet[2985]: E1216 03:54:40.198136 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:54:40.198298 kubelet[2985]: E1216 03:54:40.198227 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:54:40.198540 kubelet[2985]: E1216 03:54:40.198470 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlhfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b9867c46-mhs6d_calico-system(cc879cee-dcea-4d9b-a071-304996197deb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:54:40.200266 kubelet[2985]: E1216 03:54:40.200211 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9867c46-mhs6d" podUID="cc879cee-dcea-4d9b-a071-304996197deb" Dec 16 03:54:42.196144 containerd[1640]: time="2025-12-16T03:54:42.196003422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:54:42.512225 containerd[1640]: time="2025-12-16T03:54:42.512037703Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:54:42.513239 containerd[1640]: time="2025-12-16T03:54:42.513174946Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:54:42.514738 containerd[1640]: time="2025-12-16T03:54:42.513220900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:54:42.514842 kubelet[2985]: E1216 03:54:42.513485 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:54:42.514842 kubelet[2985]: E1216 03:54:42.513543 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:54:42.514842 kubelet[2985]: E1216 03:54:42.513996 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nstgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPa
thExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ngtdd_calico-system(d650a02a-f60f-480c-a3b2-4b144ff8489f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:54:42.515673 kubelet[2985]: E1216 03:54:42.515518 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ngtdd" podUID="d650a02a-f60f-480c-a3b2-4b144ff8489f" Dec 
16 03:54:42.516086 containerd[1640]: time="2025-12-16T03:54:42.516040734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:54:42.837558 containerd[1640]: time="2025-12-16T03:54:42.837501347Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:54:42.840400 containerd[1640]: time="2025-12-16T03:54:42.840326478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:54:42.840400 containerd[1640]: time="2025-12-16T03:54:42.840362825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:54:42.840671 kubelet[2985]: E1216 03:54:42.840615 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:54:42.840828 kubelet[2985]: E1216 03:54:42.840688 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:54:42.841774 kubelet[2985]: E1216 03:54:42.841695 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-psz9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64754b8886-qt5xx_calico-apiserver(050887c2-252c-4b73-aa88-7211b3356790): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:54:42.843324 kubelet[2985]: E1216 03:54:42.843280 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-qt5xx" podUID="050887c2-252c-4b73-aa88-7211b3356790" Dec 16 03:54:43.644656 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 03:54:43.646375 kernel: audit: type=1130 audit(1765857283.639:884): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.36.234:22-139.178.89.65:49754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:43.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.36.234:22-139.178.89.65:49754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:43.640115 systemd[1]: Started sshd@23-10.230.36.234:22-139.178.89.65:49754.service - OpenSSH per-connection server daemon (139.178.89.65:49754). 
Dec 16 03:54:44.453832 kernel: audit: type=1101 audit(1765857284.443:885): pid=5610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:44.443000 audit[5610]: USER_ACCT pid=5610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:44.454375 sshd[5610]: Accepted publickey for core from 139.178.89.65 port 49754 ssh2: RSA SHA256:dObdFWvm8KaiFhF2HtngDpY+mgAnHUgVHhfDcIK00XY Dec 16 03:54:44.460731 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:54:44.467992 kernel: audit: type=1103 audit(1765857284.456:886): pid=5610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:44.456000 audit[5610]: CRED_ACQ pid=5610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:44.472088 kernel: audit: type=1006 audit(1765857284.456:887): pid=5610 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 16 03:54:44.456000 audit[5610]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4634f480 a2=3 a3=0 items=0 ppid=1 pid=5610 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:44.456000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:54:44.483214 kernel: audit: type=1300 audit(1765857284.456:887): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4634f480 a2=3 a3=0 items=0 ppid=1 pid=5610 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:54:44.483291 kernel: audit: type=1327 audit(1765857284.456:887): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:54:44.486776 systemd-logind[1614]: New session 27 of user core. Dec 16 03:54:44.489992 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 16 03:54:44.505256 kernel: audit: type=1105 audit(1765857284.496:888): pid=5610 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:44.496000 audit[5610]: USER_START pid=5610 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:44.510771 kernel: audit: type=1103 audit(1765857284.503:889): pid=5615 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:44.503000 audit[5615]: CRED_ACQ pid=5615 uid=0 auid=500 
ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:45.055585 sshd[5615]: Connection closed by 139.178.89.65 port 49754 Dec 16 03:54:45.056596 sshd-session[5610]: pam_unix(sshd:session): session closed for user core Dec 16 03:54:45.057000 audit[5610]: USER_END pid=5610 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:45.067752 kernel: audit: type=1106 audit(1765857285.057:890): pid=5610 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:45.057000 audit[5610]: CRED_DISP pid=5610 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:45.071132 systemd[1]: sshd@23-10.230.36.234:22-139.178.89.65:49754.service: Deactivated successfully. 
Dec 16 03:54:45.078702 kernel: audit: type=1104 audit(1765857285.057:891): pid=5610 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:54:45.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.36.234:22-139.178.89.65:49754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:54:45.079132 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 03:54:45.083660 systemd-logind[1614]: Session 27 logged out. Waiting for processes to exit. Dec 16 03:54:45.085940 systemd-logind[1614]: Removed session 27. Dec 16 03:54:46.198766 containerd[1640]: time="2025-12-16T03:54:46.198688687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:54:46.518231 containerd[1640]: time="2025-12-16T03:54:46.517970788Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:54:46.519508 containerd[1640]: time="2025-12-16T03:54:46.519432208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:54:46.519508 containerd[1640]: time="2025-12-16T03:54:46.519486202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:54:46.520130 kubelet[2985]: E1216 03:54:46.520046 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:54:46.520648 kubelet[2985]: E1216 03:54:46.520157 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:54:46.520995 kubelet[2985]: E1216 03:54:46.520915 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7l2pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64754b8886-wksvf_calico-apiserver(84a83236-0b05-4630-b32a-21eabf997946): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:54:46.522228 kubelet[2985]: E1216 03:54:46.522136 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64754b8886-wksvf" podUID="84a83236-0b05-4630-b32a-21eabf997946" Dec 16 03:54:48.195404 containerd[1640]: time="2025-12-16T03:54:48.194414973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:54:48.504629 containerd[1640]: time="2025-12-16T03:54:48.504396401Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
03:54:48.506142 containerd[1640]: time="2025-12-16T03:54:48.505985657Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:54:48.506142 containerd[1640]: time="2025-12-16T03:54:48.506094086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:54:48.506371 kubelet[2985]: E1216 03:54:48.506298 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:54:48.507103 kubelet[2985]: E1216 03:54:48.506367 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:54:48.507103 kubelet[2985]: E1216 03:54:48.506611 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrplh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4px76_calico-system(68135f19-2992-417d-b99b-f5dddddbe1d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 03:54:48.510614 containerd[1640]: time="2025-12-16T03:54:48.510578011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:54:48.835887 containerd[1640]: time="2025-12-16T03:54:48.835782660Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:54:48.837192 containerd[1640]: time="2025-12-16T03:54:48.837114958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:54:48.837409 containerd[1640]: time="2025-12-16T03:54:48.837178867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:54:48.837831 kubelet[2985]: E1216 03:54:48.837775 2985 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:54:48.838158 kubelet[2985]: E1216 03:54:48.838022 2985 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:54:48.838800 kubelet[2985]: E1216 03:54:48.838391 2985 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrplh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4px76_calico-system(68135f19-2992-417d-b99b-f5dddddbe1d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:54:48.840080 kubelet[2985]: E1216 03:54:48.840007 2985 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4px76" podUID="68135f19-2992-417d-b99b-f5dddddbe1d3"