Dec 16 03:15:33.606637 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:18:19 -00 2025
Dec 16 03:15:33.606662 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 03:15:33.606674 kernel: BIOS-provided physical RAM map:
Dec 16 03:15:33.606681 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 16 03:15:33.606687 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 16 03:15:33.606693 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 16 03:15:33.606701 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Dec 16 03:15:33.606724 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Dec 16 03:15:33.606731 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 16 03:15:33.606739 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 16 03:15:33.606745 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 03:15:33.606766 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 16 03:15:33.606772 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 03:15:33.606801 kernel: NX (Execute Disable) protection: active
Dec 16 03:15:33.606812 kernel: APIC: Static calls initialized
Dec 16 03:15:33.606819 kernel: SMBIOS 3.0.0 present.
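The BIOS-e820 entries above have a fixed, machine-parseable shape. A minimal Python sketch of summing the "usable" ranges from such lines (the regex and function name are illustrative, not from any kernel tool; e820 ranges are inclusive on both ends):

```python
import re

# Matches entries like:
#   BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log_lines):
    """Sum the sizes of all e820 ranges whose type is 'usable'."""
    total = 0
    for line in log_lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1  # inclusive range
    return total
```

Applied to the two usable ranges in this log (0x0-0x9fbff and 0x100000-0x7cfdbfff), the sum comes to roughly 2 GiB, consistent with the "Memory: 1934164K/2047464K available" line later in the boot.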
Dec 16 03:15:33.606826 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Dec 16 03:15:33.606833 kernel: DMI: Memory slots populated: 1/1
Dec 16 03:15:33.606840 kernel: Hypervisor detected: KVM
Dec 16 03:15:33.606847 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Dec 16 03:15:33.606854 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 03:15:33.606861 kernel: kvm-clock: using sched offset of 4038140179 cycles
Dec 16 03:15:33.606886 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 03:15:33.606908 kernel: tsc: Detected 2445.404 MHz processor
Dec 16 03:15:33.606919 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 03:15:33.606927 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 03:15:33.606981 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Dec 16 03:15:33.606990 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 16 03:15:33.606998 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 03:15:33.607006 kernel: Using GB pages for direct mapping
Dec 16 03:15:33.607013 kernel: ACPI: Early table checksum verification disabled
Dec 16 03:15:33.607044 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Dec 16 03:15:33.607051 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:15:33.607057 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:15:33.607063 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:15:33.607070 kernel: ACPI: FACS 0x000000007CFE0000 000040
Dec 16 03:15:33.607076 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:15:33.607102 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:15:33.607110 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:15:33.607116 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:15:33.607125 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Dec 16 03:15:33.607151 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Dec 16 03:15:33.607157 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Dec 16 03:15:33.607166 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Dec 16 03:15:33.607172 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Dec 16 03:15:33.607179 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Dec 16 03:15:33.607185 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Dec 16 03:15:33.607191 kernel: No NUMA configuration found
Dec 16 03:15:33.607207 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Dec 16 03:15:33.607213 kernel: NODE_DATA(0) allocated [mem 0x7cfd4dc0-0x7cfdbfff]
Dec 16 03:15:33.607221 kernel: Zone ranges:
Dec 16 03:15:33.607227 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 03:15:33.607234 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Dec 16 03:15:33.607240 kernel: Normal empty
Dec 16 03:15:33.607247 kernel: Device empty
Dec 16 03:15:33.607253 kernel: Movable zone start for each node
Dec 16 03:15:33.607259 kernel: Early memory node ranges
Dec 16 03:15:33.607267 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 16 03:15:33.607273 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Dec 16 03:15:33.607280 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Dec 16 03:15:33.607286 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 03:15:33.607293 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 03:15:33.607299 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 16 03:15:33.607306 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 03:15:33.607312 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 03:15:33.607320 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 03:15:33.607326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 03:15:33.607333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 03:15:33.607339 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 03:15:33.607346 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 03:15:33.607352 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 03:15:33.607359 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 03:15:33.607366 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 03:15:33.607373 kernel: CPU topo: Max. logical packages: 1
Dec 16 03:15:33.607379 kernel: CPU topo: Max. logical dies: 1
Dec 16 03:15:33.607386 kernel: CPU topo: Max. dies per package: 1
Dec 16 03:15:33.607392 kernel: CPU topo: Max. threads per core: 1
Dec 16 03:15:33.607398 kernel: CPU topo: Num. cores per package: 2
Dec 16 03:15:33.607404 kernel: CPU topo: Num. threads per package: 2
Dec 16 03:15:33.607410 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 03:15:33.607418 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 03:15:33.607424 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 16 03:15:33.607431 kernel: Booting paravirtualized kernel on KVM
Dec 16 03:15:33.607437 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 03:15:33.607444 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 03:15:33.607450 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 03:15:33.607457 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 03:15:33.607465 kernel: pcpu-alloc: [0] 0 1
Dec 16 03:15:33.607471 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 16 03:15:33.607478 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 03:15:33.607485 kernel: random: crng init done
Dec 16 03:15:33.607491 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 03:15:33.607498 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 03:15:33.607506 kernel: Fallback order for Node 0: 0
Dec 16 03:15:33.607512 kernel: Built 1 zonelists, mobility grouping on. Total pages: 511866
Dec 16 03:15:33.607519 kernel: Policy zone: DMA32
Dec 16 03:15:33.607525 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 03:15:33.607532 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 03:15:33.607538 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 03:15:33.607545 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 03:15:33.607551 kernel: Dynamic Preempt: voluntary
Dec 16 03:15:33.607559 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 03:15:33.607566 kernel: rcu: RCU event tracing is enabled.
Dec 16 03:15:33.607573 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 03:15:33.607580 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 03:15:33.607586 kernel: Rude variant of Tasks RCU enabled.
Dec 16 03:15:33.607593 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 03:15:33.607599 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 03:15:33.607606 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 03:15:33.607613 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 03:15:33.607620 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 03:15:33.607626 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 03:15:33.607633 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 03:15:33.607639 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 03:15:33.607645 kernel: Console: colour VGA+ 80x25
Dec 16 03:15:33.607653 kernel: printk: legacy console [tty0] enabled
Dec 16 03:15:33.607659 kernel: printk: legacy console [ttyS0] enabled
Dec 16 03:15:33.607666 kernel: ACPI: Core revision 20240827
Dec 16 03:15:33.607676 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 03:15:33.607684 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 03:15:33.607691 kernel: x2apic enabled
Dec 16 03:15:33.607698 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 03:15:33.607705 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 03:15:33.607728 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fc319723, max_idle_ns: 440795258057 ns
Dec 16 03:15:33.607735 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Dec 16 03:15:33.607744 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 03:15:33.607764 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 16 03:15:33.607771 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 16 03:15:33.607780 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 03:15:33.607787 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 03:15:33.607794 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 03:15:33.607800 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 16 03:15:33.607807 kernel: active return thunk: retbleed_return_thunk
Dec 16 03:15:33.607814 kernel: RETBleed: Mitigation: untrained return thunk
Dec 16 03:15:33.607821 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 03:15:33.607829 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 03:15:33.607836 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 03:15:33.607842 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 03:15:33.607849 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 03:15:33.607856 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 03:15:33.607863 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 16 03:15:33.607869 kernel: Freeing SMP alternatives memory: 32K
Dec 16 03:15:33.607877 kernel: pid_max: default: 32768 minimum: 301
Dec 16 03:15:33.607884 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 03:15:33.607891 kernel: landlock: Up and running.
Dec 16 03:15:33.607897 kernel: SELinux: Initializing.
Dec 16 03:15:33.607904 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 03:15:33.607911 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 03:15:33.607918 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 16 03:15:33.607926 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 16 03:15:33.607932 kernel: ... version:                0
Dec 16 03:15:33.607939 kernel: ... bit width:              48
Dec 16 03:15:33.607946 kernel: ... generic registers:      6
Dec 16 03:15:33.607952 kernel: ... value mask:             0000ffffffffffff
Dec 16 03:15:33.607959 kernel: ... max period:             00007fffffffffff
Dec 16 03:15:33.607966 kernel: ... fixed-purpose events:   0
Dec 16 03:15:33.607973 kernel: ... event mask:             000000000000003f
Dec 16 03:15:33.607981 kernel: signal: max sigframe size: 1776
Dec 16 03:15:33.607987 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 03:15:33.607994 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 03:15:33.608001 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 03:15:33.608008 kernel: smp: Bringing up secondary CPUs ...
Dec 16 03:15:33.608015 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 03:15:33.608022 kernel: .... node  #0, CPUs:      #1
Dec 16 03:15:33.608030 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 03:15:33.608036 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS)
Dec 16 03:15:33.608044 kernel: Memory: 1934164K/2047464K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15556K init, 2484K bss, 108756K reserved, 0K cma-reserved)
Dec 16 03:15:33.608050 kernel: devtmpfs: initialized
Dec 16 03:15:33.608057 kernel: x86/mm: Memory block size: 128MB
Dec 16 03:15:33.608064 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 03:15:33.608071 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 03:15:33.608079 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 03:15:33.608086 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 03:15:33.608092 kernel: audit: initializing netlink subsys (disabled)
Dec 16 03:15:33.608099 kernel: audit: type=2000 audit(1765854929.398:1): state=initialized audit_enabled=0 res=1
Dec 16 03:15:33.608105 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 03:15:33.608112 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 03:15:33.608118 kernel: cpuidle: using governor menu
Dec 16 03:15:33.608126 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 03:15:33.608132 kernel: dca service started, version 1.12.1
Dec 16 03:15:33.608138 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 16 03:15:33.608145 kernel: PCI: Using configuration type 1 for base access
Dec 16 03:15:33.608151 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 03:15:33.608158 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 03:15:33.608164 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 03:15:33.608171 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 03:15:33.608178 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 03:15:33.608184 kernel: ACPI: Added _OSI(Module Device)
Dec 16 03:15:33.608191 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 03:15:33.608197 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 03:15:33.608203 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 03:15:33.608210 kernel: ACPI: Interpreter enabled
Dec 16 03:15:33.608217 kernel: ACPI: PM: (supports S0 S5)
Dec 16 03:15:33.608224 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 03:15:33.608230 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 03:15:33.608237 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 03:15:33.608243 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 03:15:33.608250 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 03:15:33.608433 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 03:15:33.608595 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 16 03:15:33.608805 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 16 03:15:33.608825 kernel: PCI host bridge to bus 0000:00
Dec 16 03:15:33.608967 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 03:15:33.609056 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 03:15:33.609186 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 03:15:33.609264 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Dec 16 03:15:33.609335 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 16 03:15:33.609404 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 16 03:15:33.609473 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 03:15:33.609571 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 16 03:15:33.609671 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 16 03:15:33.609800 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfb800000-0xfbffffff pref]
Dec 16 03:15:33.609891 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfd200000-0xfd203fff 64bit pref]
Dec 16 03:15:33.609981 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Dec 16 03:15:33.610060 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Dec 16 03:15:33.610139 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 03:15:33.610230 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:15:33.610309 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Dec 16 03:15:33.610386 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 16 03:15:33.610465 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 16 03:15:33.610544 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 16 03:15:33.610635 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:15:33.610799 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Dec 16 03:15:33.610919 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 16 03:15:33.611005 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 16 03:15:33.611154 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 03:15:33.611535 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:15:33.611734 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Dec 16 03:15:33.611902 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 16 03:15:33.612069 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 16 03:15:33.612209 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 03:15:33.612343 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:15:33.612471 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Dec 16 03:15:33.612608 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 16 03:15:33.612704 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 16 03:15:33.613200 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 03:15:33.613312 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:15:33.613400 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Dec 16 03:15:33.613486 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 16 03:15:33.613579 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 16 03:15:33.613687 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 03:15:33.613845 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:15:33.613965 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Dec 16 03:15:33.614053 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 16 03:15:33.614137 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 16 03:15:33.614257 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 03:15:33.614435 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:15:33.614537 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Dec 16 03:15:33.614622 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 16 03:15:33.614721 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 16 03:15:33.614861 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 03:15:33.614957 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:15:33.615048 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Dec 16 03:15:33.615152 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 16 03:15:33.615244 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 16 03:15:33.615329 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 03:15:33.615430 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 03:15:33.615519 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Dec 16 03:15:33.615601 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 16 03:15:33.615682 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 16 03:15:33.615814 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 03:15:33.615909 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 16 03:15:33.616007 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 03:15:33.616163 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 16 03:15:33.616276 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc040-0xc05f]
Dec 16 03:15:33.616427 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea1a000-0xfea1afff]
Dec 16 03:15:33.616640 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 16 03:15:33.616776 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 16 03:15:33.616897 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Dec 16 03:15:33.621092 kernel: pci 0000:01:00.0: BAR 1 [mem 0xfe880000-0xfe880fff]
Dec 16 03:15:33.621230 kernel: pci 0000:01:00.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 16 03:15:33.621326 kernel: pci 0000:01:00.0: ROM [mem 0xfe800000-0xfe87ffff pref]
Dec 16 03:15:33.621413 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 16 03:15:33.621513 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Dec 16 03:15:33.621599 kernel: pci 0000:02:00.0: BAR 0 [mem 0xfe600000-0xfe603fff 64bit]
Dec 16 03:15:33.621681 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 16 03:15:33.621815 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Dec 16 03:15:33.621903 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe400000-0xfe400fff]
Dec 16 03:15:33.621989 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfcc00000-0xfcc03fff 64bit pref]
Dec 16 03:15:33.622067 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 16 03:15:33.622155 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Dec 16 03:15:33.622237 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 16 03:15:33.622314 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 16 03:15:33.622401 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Dec 16 03:15:33.622485 kernel: pci 0000:05:00.0: BAR 1 [mem 0xfe000000-0xfe000fff]
Dec 16 03:15:33.622565 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfc800000-0xfc803fff 64bit pref]
Dec 16 03:15:33.622645 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 16 03:15:33.624417 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Dec 16 03:15:33.624520 kernel: pci 0000:06:00.0: BAR 1 [mem 0xfde00000-0xfde00fff]
Dec 16 03:15:33.624614 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfc600000-0xfc603fff 64bit pref]
Dec 16 03:15:33.624727 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 16 03:15:33.624740 kernel: acpiphp: Slot [0] registered
Dec 16 03:15:33.624852 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Dec 16 03:15:33.624966 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfdc80000-0xfdc80fff]
Dec 16 03:15:33.625053 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfc400000-0xfc403fff 64bit pref]
Dec 16 03:15:33.625140 kernel: pci 0000:07:00.0: ROM [mem 0xfdc00000-0xfdc7ffff pref]
Dec 16 03:15:33.625220 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 16 03:15:33.625230 kernel: acpiphp: Slot [0-2] registered
Dec 16 03:15:33.625309 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 16 03:15:33.625319 kernel: acpiphp: Slot [0-3] registered
Dec 16 03:15:33.625395 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 16 03:15:33.625411 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 03:15:33.625424 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 03:15:33.625431 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 03:15:33.625438 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 03:15:33.625444 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 03:15:33.625451 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 03:15:33.625457 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 03:15:33.625466 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 03:15:33.625472 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 03:15:33.625479 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 03:15:33.625486 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 03:15:33.625492 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 03:15:33.625499 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 03:15:33.625506 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 03:15:33.625514 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 03:15:33.625520 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 03:15:33.625527 kernel: iommu: Default domain type: Translated
Dec 16 03:15:33.625533 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 03:15:33.625540 kernel: PCI: Using ACPI for IRQ routing
Dec 16 03:15:33.625546 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 03:15:33.625553 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 16 03:15:33.625561 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Dec 16 03:15:33.625647 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 03:15:33.625793 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 03:15:33.625879 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 03:15:33.625889 kernel: vgaarb: loaded
Dec 16 03:15:33.625896 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 03:15:33.625903 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 03:15:33.625913 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 03:15:33.625920 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 03:15:33.625927 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 03:15:33.625933 kernel: pnp: PnP ACPI init
Dec 16 03:15:33.626022 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 16 03:15:33.626034 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 03:15:33.626041 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 03:15:33.626050 kernel: NET: Registered PF_INET protocol family
Dec 16 03:15:33.626057 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 03:15:33.626064 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 16 03:15:33.626071 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 03:15:33.626078 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 03:15:33.626084 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 16 03:15:33.626091 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 16 03:15:33.626099 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 03:15:33.626105 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 03:15:33.626112 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 03:15:33.626119 kernel: NET: Registered PF_XDP protocol family
Dec 16 03:15:33.626197 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 16 03:15:33.626276 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 16 03:15:33.626370 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 16 03:15:33.626454 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]: assigned
Dec 16 03:15:33.626533 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]: assigned
Dec 16 03:15:33.626622 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned
Dec 16 03:15:33.626767 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 16 03:15:33.626877 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 16 03:15:33.626999 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 16 03:15:33.627084 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 16 03:15:33.627168 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 16 03:15:33.627246 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 03:15:33.627324 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 16 03:15:33.627490 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 16 03:15:33.627600 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 03:15:33.627685 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 16 03:15:33.627803 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 16 03:15:33.628390 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 03:15:33.628475 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 16 03:15:33.628554 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 16 03:15:33.628632 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 03:15:33.628725 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 16 03:15:33.628834 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 16 03:15:33.628931 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 03:15:33.629014 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 16 03:15:33.629092 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Dec 16 03:15:33.629170 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 16 03:15:33.629280 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 03:15:33.629364 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 16 03:15:33.629442 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Dec 16 03:15:33.629519 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 16 03:15:33.629597 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 03:15:33.629675 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 16 03:15:33.629782 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Dec 16 03:15:33.629883 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 16 03:15:33.629966 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 03:15:33.630069 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 03:15:33.630145 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 03:15:33.630215 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 03:15:33.630285 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Dec 16 03:15:33.630365 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 16 03:15:33.630441 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 16 03:15:33.630520 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 16 03:15:33.630609 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 16 03:15:33.630690 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 16 03:15:33.630796 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 16 03:15:33.630931 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 16 03:15:33.631009 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 16 03:15:33.631087 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 16 03:15:33.631159 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 16 03:15:33.631236 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 16 03:15:33.631312 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 16 03:15:33.631388 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 16 03:15:33.631473 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 16 03:15:33.631565 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Dec 16 03:15:33.631644 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 16 03:15:33.631742 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 16 03:15:33.631841 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Dec 16 03:15:33.631945 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Dec 16 03:15:33.632019 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 16 03:15:33.632098 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Dec 16 03:15:33.632170 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 16 03:15:33.632245 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 16 03:15:33.632256 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 16 03:15:33.632263 kernel: PCI: CLS 0 bytes, default 64
Dec 16 03:15:33.632271 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fc319723, max_idle_ns: 440795258057 ns
Dec 16 03:15:33.632278 kernel: Initialise system trusted keyrings
Dec 16 03:15:33.632285 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 16 03:15:33.632292 kernel: Key type asymmetric registered
Dec 16 03:15:33.632302 kernel: Asymmetric key parser 'x509' registered
Dec 16 03:15:33.632309 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 03:15:33.632316 kernel: io scheduler mq-deadline registered
Dec 16 03:15:33.632323 kernel: io scheduler kyber registered
Dec 16 03:15:33.632330 kernel: io scheduler bfq registered
Dec 16 03:15:33.632408 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Dec 16 03:15:33.632492 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Dec 16 03:15:33.632587 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Dec 16 03:15:33.632667 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Dec 16 03:15:33.632817 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Dec 16 03:15:33.632949 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Dec 16 03:15:33.633039 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Dec 16 03:15:33.633131 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Dec 16 03:15:33.633233 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Dec 16 03:15:33.633346 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Dec 16 03:15:33.633434 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Dec 16 03:15:33.633517 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Dec 16 03:15:33.633599 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Dec 16 03:15:33.633703 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Dec 16 03:15:33.633830 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Dec 16 03:15:33.633920 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Dec 16 03:15:33.633931 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 16 03:15:33.634011 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Dec 16 03:15:33.634095 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Dec 16 03:15:33.634106 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 03:15:33.634117 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Dec 16 03:15:33.634124 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 03:15:33.634131 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 03:15:33.634139 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 03:15:33.634146 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 03:15:33.634153 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 03:15:33.634243 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 16 03:15:33.634258 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 03:15:33.634336 kernel: rtc_cmos 00:03: registered as rtc0
Dec 16 03:15:33.634414 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T03:15:31 UTC (1765854931)
Dec 16 03:15:33.634491 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 16 03:15:33.634502 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 16 03:15:33.634509 kernel: NET: Registered PF_INET6 protocol family
Dec 16 03:15:33.634519 kernel: Segment Routing with IPv6
Dec 16 03:15:33.634528 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 03:15:33.634541 kernel: NET: Registered PF_PACKET protocol family
Dec 16 03:15:33.634552 kernel: Key type dns_resolver registered
Dec 16 03:15:33.634559 kernel: IPI shorthand broadcast: enabled
Dec 16 03:15:33.634566 kernel: sched_clock: Marking stable (2313027685, 254890392)->(2608365465, -40447388)
Dec 16 03:15:33.634573 kernel: registered taskstats version 1
Dec 16 03:15:33.634581 kernel: Loading compiled-in X.509 certificates
Dec 16 03:15:33.634588 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: aafd1eb27ea805b8231c3bede9210239fae84df8'
Dec 16 03:15:33.634595 kernel: Demotion targets for Node 0: null
Dec 16 03:15:33.634602 kernel: Key type .fscrypt registered
Dec 16 03:15:33.634609 kernel: Key type fscrypt-provisioning registered
Dec 16 03:15:33.634616 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 03:15:33.634623 kernel: ima: Allocated hash algorithm: sha1
Dec 16 03:15:33.634631 kernel: ima: No architecture policies found
Dec 16 03:15:33.634638 kernel: clk: Disabling unused clocks
Dec 16 03:15:33.634645 kernel: Freeing unused kernel image (initmem) memory: 15556K
Dec 16 03:15:33.634652 kernel: Write protecting the kernel read-only data: 47104k
Dec 16 03:15:33.634659 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K
Dec 16 03:15:33.634666 kernel: Run /init as init process
Dec 16 03:15:33.634673 kernel: with arguments:
Dec 16 03:15:33.634681 kernel: /init
Dec 16 03:15:33.634688 kernel: with environment:
Dec 16 03:15:33.634695 kernel: HOME=/
Dec 16 03:15:33.634701 kernel: TERM=linux
Dec 16 03:15:33.634721 kernel: ACPI: bus type USB registered
Dec 16 03:15:33.634730 kernel: usbcore: registered new interface driver usbfs
Dec 16 03:15:33.634737 kernel: usbcore: registered new interface driver hub
Dec 16 03:15:33.634744 kernel: usbcore: registered new device driver usb
Dec 16 03:15:33.634895 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 16 03:15:33.634985 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Dec 16 03:15:33.635083 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 16 03:15:33.635174 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Dec 16 03:15:33.635262 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Dec 16 03:15:33.635347 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Dec 16 03:15:33.635461 kernel: hub 1-0:1.0: USB hub found
Dec 16 03:15:33.635556 kernel: hub 1-0:1.0: 4 ports detected
Dec 16 03:15:33.635658 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 16 03:15:33.635818 kernel: hub 2-0:1.0: USB hub found
Dec 16 03:15:33.635925 kernel: hub 2-0:1.0: 4 ports detected
Dec 16 03:15:33.635940 kernel: SCSI subsystem initialized
Dec 16 03:15:33.635948 kernel: libata version 3.00 loaded.
Dec 16 03:15:33.636035 kernel: ahci 0000:00:1f.2: version 3.0
Dec 16 03:15:33.636046 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 16 03:15:33.636128 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 16 03:15:33.636213 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 16 03:15:33.636340 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 16 03:15:33.636462 kernel: scsi host0: ahci
Dec 16 03:15:33.636557 kernel: scsi host1: ahci
Dec 16 03:15:33.636648 kernel: scsi host2: ahci
Dec 16 03:15:33.636796 kernel: scsi host3: ahci
Dec 16 03:15:33.636902 kernel: scsi host4: ahci
Dec 16 03:15:33.636991 kernel: scsi host5: ahci
Dec 16 03:15:33.637002 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 38 lpm-pol 1
Dec 16 03:15:33.637010 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 38 lpm-pol 1
Dec 16 03:15:33.637017 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 38 lpm-pol 1
Dec 16 03:15:33.637024 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 38 lpm-pol 1
Dec 16 03:15:33.637035 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 38 lpm-pol 1
Dec 16 03:15:33.637042 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 38 lpm-pol 1
Dec 16 03:15:33.637147 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Dec 16 03:15:33.637162 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 03:15:33.637169 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 16 03:15:33.637176 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 16 03:15:33.637183 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 16 03:15:33.637192 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 16 03:15:33.637199 kernel: ata1.00: LPM support broken, forcing max_power
Dec 16 03:15:33.637206 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 16 03:15:33.637213 kernel: ata1.00: applying bridge limits
Dec 16 03:15:33.637220 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 16 03:15:33.637227 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 16 03:15:33.637234 kernel: ata1.00: LPM support broken, forcing max_power
Dec 16 03:15:33.637242 kernel: ata1.00: configured for UDMA/100
Dec 16 03:15:33.637347 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 16 03:15:33.637359 kernel: usbcore: registered new interface driver usbhid
Dec 16 03:15:33.637366 kernel: usbhid: USB HID core driver
Dec 16 03:15:33.637455 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 16 03:15:33.637466 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 16 03:15:33.637560 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues
Dec 16 03:15:33.637653 kernel: scsi host6: Virtio SCSI HBA
Dec 16 03:15:33.644316 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 16 03:15:33.644432 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 16 03:15:33.644451 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Dec 16 03:15:33.644566 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Dec 16 03:15:33.644666 kernel: sd 6:0:0:0: Power-on or device reset occurred
Dec 16 03:15:33.644815 kernel: sd 6:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Dec 16 03:15:33.644951 kernel: sd 6:0:0:0: [sda] Write Protect is off
Dec 16 03:15:33.645066 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 16 03:15:33.645185 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 16 03:15:33.645202 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 03:15:33.645210 kernel: GPT:25804799 != 80003071
Dec 16 03:15:33.645217 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 03:15:33.645224 kernel: GPT:25804799 != 80003071
Dec 16 03:15:33.645230 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 03:15:33.645237 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 03:15:33.645330 kernel: sd 6:0:0:0: [sda] Attached SCSI disk
Dec 16 03:15:33.645344 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 03:15:33.645352 kernel: device-mapper: uevent: version 1.0.3
Dec 16 03:15:33.645359 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 03:15:33.645366 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Dec 16 03:15:33.645374 kernel: raid6: avx2x4 gen() 19086 MB/s
Dec 16 03:15:33.645381 kernel: raid6: avx2x2 gen() 25321 MB/s
Dec 16 03:15:33.645388 kernel: raid6: avx2x1 gen() 25598 MB/s
Dec 16 03:15:33.645396 kernel: raid6: using algorithm avx2x1 gen() 25598 MB/s
Dec 16 03:15:33.645403 kernel: raid6: .... xor() 26511 MB/s, rmw enabled
Dec 16 03:15:33.645410 kernel: raid6: using avx2x2 recovery algorithm
Dec 16 03:15:33.645417 kernel: xor: automatically using best checksumming function avx
Dec 16 03:15:33.645424 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 03:15:33.645431 kernel: BTRFS: device fsid 57a8262f-2900-48ba-a17e-aafbd70d59c7 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (185)
Dec 16 03:15:33.645439 kernel: BTRFS info (device dm-0): first mount of filesystem 57a8262f-2900-48ba-a17e-aafbd70d59c7
Dec 16 03:15:33.645447 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 03:15:33.645454 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 03:15:33.645461 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 03:15:33.645468 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 03:15:33.645475 kernel: loop: module loaded
Dec 16 03:15:33.645483 kernel: loop0: detected capacity change from 0 to 100528
Dec 16 03:15:33.645490 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 03:15:33.645499 systemd[1]: Successfully made /usr/ read-only.
Dec 16 03:15:33.645510 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 03:15:33.645518 systemd[1]: Detected virtualization kvm.
Dec 16 03:15:33.645525 systemd[1]: Detected architecture x86-64.
Dec 16 03:15:33.645532 systemd[1]: Running in initrd.
Dec 16 03:15:33.645540 systemd[1]: No hostname configured, using default hostname.
Dec 16 03:15:33.645548 systemd[1]: Hostname set to .
Dec 16 03:15:33.645556 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 16 03:15:33.645564 systemd[1]: Queued start job for default target initrd.target.
Dec 16 03:15:33.645571 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 03:15:33.645579 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 03:15:33.645587 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 03:15:33.645595 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 03:15:33.645603 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 03:15:33.645614 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 03:15:33.645627 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 03:15:33.645635 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 03:15:33.645643 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 03:15:33.645652 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 03:15:33.645659 systemd[1]: Reached target paths.target - Path Units.
Dec 16 03:15:33.645667 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 03:15:33.645675 systemd[1]: Reached target swap.target - Swaps.
Dec 16 03:15:33.645682 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 03:15:33.645689 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 03:15:33.645697 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 03:15:33.645705 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 03:15:33.645725 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 03:15:33.645733 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 03:15:33.645740 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 03:15:33.645748 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 03:15:33.647788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 03:15:33.647798 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 03:15:33.647809 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 03:15:33.647817 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 03:15:33.647825 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 03:15:33.647833 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 03:15:33.647841 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 03:15:33.647848 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 03:15:33.647857 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 03:15:33.647864 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 03:15:33.647872 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 03:15:33.647880 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 03:15:33.647893 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 03:15:33.647907 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 03:15:33.647915 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 03:15:33.647944 systemd-journald[322]: Collecting audit messages is enabled.
Dec 16 03:15:33.647966 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 03:15:33.647974 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 03:15:33.647983 kernel: audit: type=1130 audit(1765854933.636:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.647991 systemd-journald[322]: Journal started
Dec 16 03:15:33.648010 systemd-journald[322]: Runtime Journal (/run/log/journal/998ceb00ebae4628973b1c7d89969dc2) is 4.7M, max 38.1M, 33.4M free.
Dec 16 03:15:33.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.649382 systemd-modules-load[323]: Inserted module 'br_netfilter'
Dec 16 03:15:33.725731 kernel: Bridge firewalling registered
Dec 16 03:15:33.725782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 03:15:33.728808 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 03:15:33.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.731037 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 03:15:33.737444 kernel: audit: type=1130 audit(1765854933.728:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.745925 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 03:15:33.746993 kernel: audit: type=1130 audit(1765854933.737:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.755055 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 03:15:33.761439 kernel: audit: type=1130 audit(1765854933.746:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.761459 kernel: audit: type=1130 audit(1765854933.755:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.758579 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 03:15:33.773860 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 03:15:33.776859 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 03:15:33.789106 systemd-tmpfiles[342]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 03:15:33.799531 kernel: audit: type=1130 audit(1765854933.790:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.790818 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 03:15:33.798860 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 03:15:33.801800 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 03:15:33.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.811804 kernel: audit: type=1130 audit(1765854933.803:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.811186 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 03:15:33.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.815838 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 03:15:33.823967 kernel: audit: type=1130 audit(1765854933.811:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.823988 kernel: audit: type=1334 audit(1765854933.812:10): prog-id=6 op=LOAD
Dec 16 03:15:33.812000 audit: BPF prog-id=6 op=LOAD
Dec 16 03:15:33.825177 dracut-cmdline[355]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 03:15:33.862596 systemd-resolved[365]: Positive Trust Anchors:
Dec 16 03:15:33.862606 systemd-resolved[365]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 03:15:33.862609 systemd-resolved[365]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 03:15:33.862633 systemd-resolved[365]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 03:15:33.896307 systemd-resolved[365]: Defaulting to hostname 'linux'.
Dec 16 03:15:33.898148 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 03:15:33.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.899856 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 03:15:33.902207 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 03:15:33.915776 kernel: iscsi: registered transport (tcp)
Dec 16 03:15:33.934849 kernel: iscsi: registered transport (qla4xxx)
Dec 16 03:15:33.934884 kernel: QLogic iSCSI HBA Driver
Dec 16 03:15:33.956427 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 03:15:33.969161 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 03:15:33.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:33.971828 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 03:15:34.002017 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 03:15:34.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:34.003850 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 03:15:34.008855 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 03:15:34.030242 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 03:15:34.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:34.032000 audit: BPF prog-id=7 op=LOAD
Dec 16 03:15:34.032000 audit: BPF prog-id=8 op=LOAD
Dec 16 03:15:34.033904 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 03:15:34.057922 systemd-udevd[604]: Using default interface naming scheme 'v257'.
Dec 16 03:15:34.069223 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 03:15:34.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:34.071149 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 03:15:34.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:34.074244 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 03:15:34.075000 audit: BPF prog-id=9 op=LOAD
Dec 16 03:15:34.077845 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 03:15:34.098255 dracut-pre-trigger[702]: rd.md=0: removing MD RAID activation
Dec 16 03:15:34.118691 systemd-networkd[703]: lo: Link UP
Dec 16 03:15:34.118698 systemd-networkd[703]: lo: Gained carrier
Dec 16 03:15:34.120623 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 03:15:34.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:34.121470 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 03:15:34.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:34.123074 systemd[1]: Reached target network.target - Network. Dec 16 03:15:34.124919 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:15:34.179217 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:15:34.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:34.184360 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 03:15:34.282915 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 16 03:15:34.304008 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 16 03:15:34.319831 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 03:15:34.320105 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 16 03:15:34.334242 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 16 03:15:34.333603 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 16 03:15:34.340231 kernel: AES CTR mode by8 optimization enabled Dec 16 03:15:34.352450 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 03:15:34.363658 systemd-networkd[703]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:15:34.363668 systemd-networkd[703]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 03:15:34.365424 systemd-networkd[703]: eth1: Link UP Dec 16 03:15:34.365566 systemd-networkd[703]: eth1: Gained carrier Dec 16 03:15:34.365575 systemd-networkd[703]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:15:34.370817 systemd-networkd[703]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:15:34.370820 systemd-networkd[703]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 03:15:34.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:34.373334 systemd-networkd[703]: eth0: Link UP Dec 16 03:15:34.373999 systemd-networkd[703]: eth0: Gained carrier Dec 16 03:15:34.374007 systemd-networkd[703]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:15:34.374484 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:15:34.374573 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:15:34.380867 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:15:34.385444 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:15:34.394790 disk-uuid[839]: Primary Header is updated. Dec 16 03:15:34.394790 disk-uuid[839]: Secondary Entries is updated. Dec 16 03:15:34.394790 disk-uuid[839]: Secondary Header is updated. Dec 16 03:15:34.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:34.396030 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 03:15:34.397100 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 03:15:34.397801 systemd-networkd[703]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Dec 16 03:15:34.398547 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:15:34.402226 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 03:15:34.409413 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 03:15:34.438812 systemd-networkd[703]: eth0: DHCPv4 address 65.108.246.88/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 16 03:15:34.534334 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:15:34.543640 kernel: kauditd_printk_skb: 14 callbacks suppressed Dec 16 03:15:34.543662 kernel: audit: type=1130 audit(1765854934.534:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:34.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:34.549696 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 03:15:34.556882 kernel: audit: type=1130 audit(1765854934.549:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:34.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:35.452867 disk-uuid[845]: Warning: The kernel is still using the old partition table. Dec 16 03:15:35.452867 disk-uuid[845]: The new table will be used at the next reboot or after you Dec 16 03:15:35.452867 disk-uuid[845]: run partprobe(8) or kpartx(8) Dec 16 03:15:35.452867 disk-uuid[845]: The operation has completed successfully. Dec 16 03:15:35.458320 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 03:15:35.483422 kernel: audit: type=1130 audit(1765854935.458:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:35.483462 kernel: audit: type=1131 audit(1765854935.458:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:35.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:35.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:35.458444 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 03:15:35.461951 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Dec 16 03:15:35.526824 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (868) Dec 16 03:15:35.547620 kernel: BTRFS info (device sda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:15:35.547681 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:15:35.564790 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 03:15:35.564847 kernel: BTRFS info (device sda6): turning on async discard Dec 16 03:15:35.564869 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 03:15:35.583789 kernel: BTRFS info (device sda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:15:35.584710 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 03:15:35.600154 kernel: audit: type=1130 audit(1765854935.585:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:35.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:35.590032 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 03:15:35.786450 ignition[887]: Ignition 2.24.0 Dec 16 03:15:35.786476 ignition[887]: Stage: fetch-offline Dec 16 03:15:35.786542 ignition[887]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:15:35.808299 kernel: audit: type=1130 audit(1765854935.790:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:35.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 03:15:35.789899 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 03:15:35.786558 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 03:15:35.793943 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 16 03:15:35.786674 ignition[887]: parsed url from cmdline: "" Dec 16 03:15:35.808013 systemd-networkd[703]: eth0: Gained IPv6LL Dec 16 03:15:35.786679 ignition[887]: no config URL provided Dec 16 03:15:35.786686 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 03:15:35.786703 ignition[887]: no config at "/usr/lib/ignition/user.ign" Dec 16 03:15:35.786741 ignition[887]: failed to fetch config: resource requires networking Dec 16 03:15:35.787224 ignition[887]: Ignition finished successfully Dec 16 03:15:35.829515 ignition[893]: Ignition 2.24.0 Dec 16 03:15:35.829528 ignition[893]: Stage: fetch Dec 16 03:15:35.829672 ignition[893]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:15:35.829681 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 03:15:35.829793 ignition[893]: parsed url from cmdline: "" Dec 16 03:15:35.829798 ignition[893]: no config URL provided Dec 16 03:15:35.829807 ignition[893]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 03:15:35.829817 ignition[893]: no config at "/usr/lib/ignition/user.ign" Dec 16 03:15:35.829883 ignition[893]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Dec 16 03:15:35.839094 ignition[893]: GET result: OK Dec 16 03:15:35.839191 ignition[893]: parsing config with SHA512: 5b8c4fba63f24dc0a259c627f93bbd2959cfb8e3c5c6196b1e1d8e989b197e1063d5180de1f7207ddddd7031c4b6e56f1cd4e503a4291bd96112eb41e06fbbbf Dec 16 03:15:35.844795 unknown[893]: fetched base config from "system" Dec 16 03:15:35.845262 ignition[893]: fetch: fetch complete Dec 16 03:15:35.844810 unknown[893]: fetched base config from "system" Dec 16 03:15:35.858436 
kernel: audit: type=1130 audit(1765854935.848:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:35.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:35.845270 ignition[893]: fetch: fetch passed Dec 16 03:15:35.844818 unknown[893]: fetched user config from "hetzner" Dec 16 03:15:35.845324 ignition[893]: Ignition finished successfully Dec 16 03:15:35.848261 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 03:15:35.850632 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 03:15:35.871889 systemd-networkd[703]: eth1: Gained IPv6LL Dec 16 03:15:35.876103 ignition[899]: Ignition 2.24.0 Dec 16 03:15:35.876116 ignition[899]: Stage: kargs Dec 16 03:15:35.876241 ignition[899]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:15:35.878596 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 03:15:35.887971 kernel: audit: type=1130 audit(1765854935.879:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:35.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:35.876249 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 03:15:35.882863 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 16 03:15:35.876881 ignition[899]: kargs: kargs passed Dec 16 03:15:35.876915 ignition[899]: Ignition finished successfully Dec 16 03:15:35.904632 ignition[905]: Ignition 2.24.0 Dec 16 03:15:35.904646 ignition[905]: Stage: disks Dec 16 03:15:35.904826 ignition[905]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:15:35.906427 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 03:15:35.916035 kernel: audit: type=1130 audit(1765854935.907:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:35.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:35.904834 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 03:15:35.908288 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 03:15:35.905554 ignition[905]: disks: disks passed Dec 16 03:15:35.916831 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 03:15:35.905589 ignition[905]: Ignition finished successfully Dec 16 03:15:35.918634 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 03:15:35.920375 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 03:15:35.922113 systemd[1]: Reached target basic.target - Basic System. Dec 16 03:15:35.924895 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 03:15:35.957653 systemd-fsck[913]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 16 03:15:35.959443 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Dec 16 03:15:35.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:35.963822 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 03:15:35.971408 kernel: audit: type=1130 audit(1765854935.960:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:36.105802 kernel: EXT4-fs (sda9): mounted filesystem 1314c107-11a5-486b-9d52-be9f57b6bf1b r/w with ordered data mode. Quota mode: none. Dec 16 03:15:36.105696 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 03:15:36.107954 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 03:15:36.113305 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 03:15:36.117999 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 03:15:36.131941 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 03:15:36.136597 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 03:15:36.136651 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 03:15:36.143834 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 03:15:36.156811 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (922) Dec 16 03:15:36.164005 kernel: BTRFS info (device sda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:15:36.163943 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 16 03:15:36.174524 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:15:36.182597 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 03:15:36.182649 kernel: BTRFS info (device sda6): turning on async discard Dec 16 03:15:36.187410 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 03:15:36.193930 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 03:15:36.252256 coreos-metadata[924]: Dec 16 03:15:36.251 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Dec 16 03:15:36.254106 coreos-metadata[924]: Dec 16 03:15:36.253 INFO Fetch successful Dec 16 03:15:36.255828 coreos-metadata[924]: Dec 16 03:15:36.254 INFO wrote hostname ci-4547-0-0-6-1137cb7bd3 to /sysroot/etc/hostname Dec 16 03:15:36.258420 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 03:15:36.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:36.421827 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 03:15:36.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:36.424836 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 03:15:36.428967 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 03:15:36.444655 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 03:15:36.451819 kernel: BTRFS info (device sda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:15:36.475885 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 16 03:15:36.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:36.479417 ignition[1024]: INFO : Ignition 2.24.0 Dec 16 03:15:36.479417 ignition[1024]: INFO : Stage: mount Dec 16 03:15:36.479417 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:15:36.479417 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 03:15:36.479417 ignition[1024]: INFO : mount: mount passed Dec 16 03:15:36.479417 ignition[1024]: INFO : Ignition finished successfully Dec 16 03:15:36.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:36.480525 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 03:15:36.485833 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 03:15:37.108130 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 03:15:37.125790 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1035) Dec 16 03:15:37.130412 kernel: BTRFS info (device sda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:15:37.130462 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:15:37.139349 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 03:15:37.139399 kernel: BTRFS info (device sda6): turning on async discard Dec 16 03:15:37.143394 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 03:15:37.145895 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 03:15:37.175683 ignition[1051]: INFO : Ignition 2.24.0 Dec 16 03:15:37.175683 ignition[1051]: INFO : Stage: files Dec 16 03:15:37.177401 ignition[1051]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:15:37.177401 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 03:15:37.177401 ignition[1051]: DEBUG : files: compiled without relabeling support, skipping Dec 16 03:15:37.180038 ignition[1051]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 03:15:37.180038 ignition[1051]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 03:15:37.181847 ignition[1051]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 03:15:37.181847 ignition[1051]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 03:15:37.181847 ignition[1051]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 03:15:37.181701 unknown[1051]: wrote ssh authorized keys file for user: core Dec 16 03:15:37.185734 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 16 03:15:37.186824 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Dec 16 03:15:37.385634 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 03:15:37.686251 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 16 03:15:37.686251 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 03:15:37.689456 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Dec 16 03:15:37.689456 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 03:15:37.689456 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 03:15:37.689456 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 03:15:37.689456 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 03:15:37.689456 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 03:15:37.689456 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 03:15:37.689456 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 03:15:37.689456 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 03:15:37.700138 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 03:15:37.700138 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 03:15:37.700138 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 03:15:37.700138 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Dec 16 03:15:38.298366 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 03:15:41.906794 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 03:15:41.906794 ignition[1051]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 03:15:41.912835 ignition[1051]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 03:15:41.915377 ignition[1051]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 03:15:41.915377 ignition[1051]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 03:15:41.915377 ignition[1051]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 16 03:15:41.915377 ignition[1051]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 16 03:15:41.915377 ignition[1051]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 16 03:15:41.915377 ignition[1051]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 16 03:15:41.915377 ignition[1051]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Dec 16 03:15:41.915377 ignition[1051]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 03:15:41.960141 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 16 03:15:41.960183 kernel: audit: type=1130 audit(1765854941.929:39): pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:41.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:41.923120 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 03:15:41.965168 ignition[1051]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 03:15:41.965168 ignition[1051]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 03:15:41.965168 ignition[1051]: INFO : files: files passed Dec 16 03:15:41.965168 ignition[1051]: INFO : Ignition finished successfully Dec 16 03:15:42.015998 kernel: audit: type=1130 audit(1765854941.971:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:42.016047 kernel: audit: type=1131 audit(1765854941.971:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:42.016068 kernel: audit: type=1130 audit(1765854942.000:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:41.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:41.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:42.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:41.931881 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 03:15:41.957648 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 03:15:41.966032 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 03:15:42.025419 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:15:41.969262 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 03:15:42.029561 initrd-setup-root-after-ignition[1082]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:15:42.029561 initrd-setup-root-after-ignition[1082]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:15:41.996582 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 03:15:42.001046 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 03:15:42.017875 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 03:15:42.072711 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 03:15:42.072899 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Dec 16 03:15:42.102571 kernel: audit: type=1130 audit(1765854942.075:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.102604 kernel: audit: type=1131 audit(1765854942.075:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.076465 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 03:15:42.104154 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 03:15:42.107604 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 03:15:42.109991 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 03:15:42.154823 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 03:15:42.170818 kernel: audit: type=1130 audit(1765854942.157:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.160112 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 03:15:42.193101 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 03:15:42.195916 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 03:15:42.197474 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 03:15:42.200957 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 03:15:42.204688 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 03:15:42.222800 kernel: audit: type=1131 audit(1765854942.207:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.204891 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 03:15:42.221990 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 03:15:42.224255 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 03:15:42.227703 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 03:15:42.230887 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 03:15:42.234110 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 03:15:42.237564 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 03:15:42.240997 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 03:15:42.244383 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 03:15:42.248083 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 03:15:42.251384 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 03:15:42.255044 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 03:15:42.276720 kernel: audit: type=1131 audit(1765854942.261:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.258842 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 03:15:42.258948 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 03:15:42.276022 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 03:15:42.278524 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 03:15:42.281587 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 03:15:42.306086 kernel: audit: type=1131 audit(1765854942.289:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.284639 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 03:15:42.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.287193 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 03:15:42.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.287314 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 03:15:42.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.305418 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 03:15:42.305539 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 03:15:42.307798 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 03:15:42.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.307917 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 03:15:42.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.311579 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 16 03:15:42.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.311663 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 03:15:42.316093 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 03:15:42.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.319934 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 03:15:42.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.322103 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 03:15:42.322222 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 03:15:42.327833 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 03:15:42.329061 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 03:15:42.331702 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 03:15:42.331905 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 03:15:42.337015 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 03:15:42.337148 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 03:15:42.370615 ignition[1107]: INFO : Ignition 2.24.0
Dec 16 03:15:42.372669 ignition[1107]: INFO : Stage: umount
Dec 16 03:15:42.372669 ignition[1107]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 03:15:42.372669 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 03:15:42.378497 ignition[1107]: INFO : umount: umount passed
Dec 16 03:15:42.378497 ignition[1107]: INFO : Ignition finished successfully
Dec 16 03:15:42.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.378309 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 03:15:42.379623 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 03:15:42.383549 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 03:15:42.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.385522 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 03:15:42.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.385642 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 03:15:42.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.388409 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 03:15:42.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.388498 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 03:15:42.390280 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 03:15:42.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.390338 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 03:15:42.392382 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 03:15:42.392435 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 03:15:42.394491 systemd[1]: Stopped target network.target - Network.
Dec 16 03:15:42.396553 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 03:15:42.396655 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 03:15:42.398565 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 03:15:42.400181 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 03:15:42.405205 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 03:15:42.406662 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 03:15:42.408416 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 03:15:42.410464 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 03:15:42.410502 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 03:15:42.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.413375 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 03:15:42.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.413423 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 03:15:42.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.415655 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 16 03:15:42.415691 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 03:15:42.418177 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 03:15:42.418252 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 03:15:42.420544 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 03:15:42.420609 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 03:15:42.422900 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 03:15:42.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.422968 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 03:15:42.425652 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 03:15:42.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.457000 audit: BPF prog-id=9 op=UNLOAD
Dec 16 03:15:42.428667 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 03:15:42.460000 audit: BPF prog-id=6 op=UNLOAD
Dec 16 03:15:42.443105 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 03:15:42.443264 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 03:15:42.454631 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 03:15:42.454843 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 03:15:42.461070 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 03:15:42.463616 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 03:15:42.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.463675 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 03:15:42.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.467317 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 03:15:42.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.469404 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 03:15:42.469477 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 03:15:42.474856 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 03:15:42.474938 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 03:15:42.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.477355 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 03:15:42.477432 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 03:15:42.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.480344 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 03:15:42.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.489423 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 03:15:42.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.489569 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 03:15:42.494961 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 03:15:42.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.495043 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 03:15:42.496490 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 03:15:42.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.496550 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 03:15:42.499909 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 03:15:42.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.500014 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 03:15:42.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.502632 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 03:15:42.502706 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 03:15:42.505876 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 03:15:42.505950 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 03:15:42.510379 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 03:15:42.513707 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 03:15:42.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.513850 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 03:15:42.518608 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 03:15:42.518671 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 03:15:42.521813 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 16 03:15:42.521868 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 03:15:42.524022 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 03:15:42.524089 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 03:15:42.525889 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 03:15:42.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:42.525959 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 03:15:42.538515 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 03:15:42.538637 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 03:15:42.551333 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 03:15:42.551545 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 03:15:42.554550 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 03:15:42.558167 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 03:15:42.589314 systemd[1]: Switching root.
Dec 16 03:15:42.632853 systemd-journald[322]: Journal stopped
Dec 16 03:15:43.598593 systemd-journald[322]: Received SIGTERM from PID 1 (systemd).
Dec 16 03:15:43.598634 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 03:15:43.598648 kernel: SELinux: policy capability open_perms=1
Dec 16 03:15:43.598657 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 03:15:43.598674 kernel: SELinux: policy capability always_check_network=0
Dec 16 03:15:43.598683 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 03:15:43.598693 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 03:15:43.598704 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 03:15:43.598714 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 03:15:43.598737 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 03:15:43.598765 systemd[1]: Successfully loaded SELinux policy in 66.757ms.
Dec 16 03:15:43.598783 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.648ms.
Dec 16 03:15:43.598793 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 03:15:43.598805 systemd[1]: Detected virtualization kvm.
Dec 16 03:15:43.598814 systemd[1]: Detected architecture x86-64.
Dec 16 03:15:43.598823 systemd[1]: Detected first boot.
Dec 16 03:15:43.598832 systemd[1]: Hostname set to .
Dec 16 03:15:43.598842 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 16 03:15:43.598852 zram_generator::config[1150]: No configuration found.
Dec 16 03:15:43.598864 kernel: Guest personality initialized and is inactive
Dec 16 03:15:43.598873 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 16 03:15:43.598882 kernel: Initialized host personality
Dec 16 03:15:43.598890 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 03:15:43.598899 systemd[1]: Populated /etc with preset unit settings.
Dec 16 03:15:43.598909 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 03:15:43.598918 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 03:15:43.598928 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 03:15:43.598939 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 03:15:43.598948 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 03:15:43.598958 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 03:15:43.598967 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 03:15:43.598977 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 03:15:43.598987 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 03:15:43.598996 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 03:15:43.599006 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 03:15:43.599015 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 03:15:43.599024 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 03:15:43.599036 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 03:15:43.599046 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 03:15:43.599056 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 03:15:43.599065 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 03:15:43.599075 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 03:15:43.599085 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 03:15:43.599094 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 03:15:43.599103 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 03:15:43.599112 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 03:15:43.599121 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 03:15:43.599130 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 03:15:43.599140 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 03:15:43.599149 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 03:15:43.599159 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Dec 16 03:15:43.599169 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 03:15:43.599178 systemd[1]: Reached target swap.target - Swaps.
Dec 16 03:15:43.599187 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 03:15:43.599197 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 03:15:43.599206 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 03:15:43.599215 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 03:15:43.599225 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Dec 16 03:15:43.599234 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 03:15:43.599243 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Dec 16 03:15:43.599253 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Dec 16 03:15:43.599262 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 03:15:43.599271 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 03:15:43.599281 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 03:15:43.599291 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 03:15:43.599300 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 03:15:43.599310 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 03:15:43.599319 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 03:15:43.599329 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 03:15:43.599338 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 03:15:43.599347 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 03:15:43.599358 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 03:15:43.599367 systemd[1]: Reached target machines.target - Containers.
Dec 16 03:15:43.599376 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 03:15:43.599386 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 03:15:43.599395 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 03:15:43.599404 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 03:15:43.599413 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 03:15:43.599423 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 03:15:43.599432 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 03:15:43.599441 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 03:15:43.599450 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 03:15:43.599460 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 03:15:43.599469 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 03:15:43.599479 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 03:15:43.599488 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 03:15:43.599498 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 03:15:43.599507 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 03:15:43.599517 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 03:15:43.599527 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 03:15:43.599536 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 03:15:43.599545 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 03:15:43.599554 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 03:15:43.599562 kernel: ACPI: bus type drm_connector registered
Dec 16 03:15:43.599570 kernel: fuse: init (API version 7.41)
Dec 16 03:15:43.599580 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 03:15:43.599589 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 03:15:43.599599 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 03:15:43.599608 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 03:15:43.599618 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 03:15:43.599628 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 03:15:43.599637 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 03:15:43.599647 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 03:15:43.599668 systemd-journald[1231]: Collecting audit messages is enabled.
Dec 16 03:15:43.599692 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 03:15:43.599702 systemd-journald[1231]: Journal started
Dec 16 03:15:43.599720 systemd-journald[1231]: Runtime Journal (/run/log/journal/998ceb00ebae4628973b1c7d89969dc2) is 4.7M, max 38.1M, 33.4M free.
Dec 16 03:15:43.605303 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 03:15:43.605334 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 03:15:43.350000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 16 03:15:43.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:43.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:43.514000 audit: BPF prog-id=14 op=UNLOAD
Dec 16 03:15:43.514000 audit: BPF prog-id=13 op=UNLOAD
Dec 16 03:15:43.514000 audit: BPF prog-id=15 op=LOAD
Dec 16 03:15:43.514000 audit: BPF prog-id=16 op=LOAD
Dec 16 03:15:43.514000 audit: BPF prog-id=17 op=LOAD
Dec 16 03:15:43.596000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 16 03:15:43.596000 audit[1231]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffde9a44050 a2=4000 a3=0 items=0 ppid=1 pid=1231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:15:43.596000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 16 03:15:43.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:43.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:15:43.236748 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 03:15:43.241412 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 16 03:15:43.242005 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 03:15:43.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.615805 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 03:15:43.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.617931 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 03:15:43.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.619203 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:15:43.619398 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:15:43.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.620397 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:15:43.620570 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 03:15:43.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:43.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.621516 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:15:43.621674 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:15:43.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.622666 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 03:15:43.622891 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 03:15:43.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.623882 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:15:43.624065 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:15:43.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:43.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.625217 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:15:43.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.626318 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:15:43.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.627889 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 03:15:43.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.629021 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 03:15:43.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.635349 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 16 03:15:43.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.637952 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:15:43.638808 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 03:15:43.639496 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 03:15:43.639524 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 03:15:43.640910 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 03:15:43.642388 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:15:43.642482 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:15:43.643491 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 03:15:43.644898 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 03:15:43.646711 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:15:43.653868 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 03:15:43.655106 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:15:43.658133 systemd-journald[1231]: Time spent on flushing to /var/log/journal/998ceb00ebae4628973b1c7d89969dc2 is 29.829ms for 1296 entries. 
Dec 16 03:15:43.658133 systemd-journald[1231]: System Journal (/var/log/journal/998ceb00ebae4628973b1c7d89969dc2) is 8M, max 588.1M, 580.1M free. Dec 16 03:15:43.703225 systemd-journald[1231]: Received client request to flush runtime journal. Dec 16 03:15:43.703272 kernel: loop1: detected capacity change from 0 to 50784 Dec 16 03:15:43.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.658158 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 03:15:43.663498 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 03:15:43.665438 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 03:15:43.672919 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 03:15:43.673948 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 03:15:43.678002 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 03:15:43.703782 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:15:43.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.705564 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 03:15:43.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:43.711511 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Dec 16 03:15:43.711524 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Dec 16 03:15:43.717868 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:15:43.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.721512 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 03:15:43.722788 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 03:15:43.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.736770 kernel: loop2: detected capacity change from 0 to 8 Dec 16 03:15:43.752270 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 03:15:43.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.754787 kernel: loop3: detected capacity change from 0 to 111560 Dec 16 03:15:43.753000 audit: BPF prog-id=18 op=LOAD Dec 16 03:15:43.753000 audit: BPF prog-id=19 op=LOAD Dec 16 03:15:43.753000 audit: BPF prog-id=20 op=LOAD Dec 16 03:15:43.755405 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 03:15:43.757000 audit: BPF prog-id=21 op=LOAD Dec 16 03:15:43.758968 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 16 03:15:43.761913 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:15:43.764000 audit: BPF prog-id=22 op=LOAD Dec 16 03:15:43.764000 audit: BPF prog-id=23 op=LOAD Dec 16 03:15:43.764000 audit: BPF prog-id=24 op=LOAD Dec 16 03:15:43.765994 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 03:15:43.767000 audit: BPF prog-id=25 op=LOAD Dec 16 03:15:43.767000 audit: BPF prog-id=26 op=LOAD Dec 16 03:15:43.767000 audit: BPF prog-id=27 op=LOAD Dec 16 03:15:43.770149 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 03:15:43.787772 kernel: loop4: detected capacity change from 0 to 224512 Dec 16 03:15:43.793891 systemd-tmpfiles[1294]: ACLs are not supported, ignoring. Dec 16 03:15:43.793920 systemd-tmpfiles[1294]: ACLs are not supported, ignoring. Dec 16 03:15:43.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.803827 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:15:43.812698 systemd-nsresourced[1295]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 03:15:43.813867 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 03:15:43.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:43.824779 kernel: loop5: detected capacity change from 0 to 50784 Dec 16 03:15:43.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:43.832566 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 03:15:43.844498 kernel: loop6: detected capacity change from 0 to 8 Dec 16 03:15:43.849780 kernel: loop7: detected capacity change from 0 to 111560 Dec 16 03:15:43.872778 kernel: loop1: detected capacity change from 0 to 224512 Dec 16 03:15:43.893179 (sd-merge)[1307]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-hetzner.raw'. Dec 16 03:15:43.900667 (sd-merge)[1307]: Merged extensions into '/usr'. Dec 16 03:15:43.908884 systemd[1]: Reload requested from client PID 1275 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 03:15:43.908897 systemd[1]: Reloading... Dec 16 03:15:43.938628 systemd-oomd[1292]: No swap; memory pressure usage will be degraded Dec 16 03:15:43.950371 systemd-resolved[1293]: Positive Trust Anchors: Dec 16 03:15:43.950386 systemd-resolved[1293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:15:43.950389 systemd-resolved[1293]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:15:43.950414 systemd-resolved[1293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:15:43.972058 systemd-resolved[1293]: Using system hostname 'ci-4547-0-0-6-1137cb7bd3'. Dec 16 03:15:43.975798 zram_generator::config[1344]: No configuration found. Dec 16 03:15:44.142426 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 03:15:44.142697 systemd[1]: Reloading finished in 233 ms. Dec 16 03:15:44.168174 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 03:15:44.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.169036 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 03:15:44.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.169914 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 03:15:44.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:44.170953 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 03:15:44.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.174346 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:15:44.186792 systemd[1]: Starting ensure-sysext.service... Dec 16 03:15:44.188413 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 03:15:44.188000 audit: BPF prog-id=8 op=UNLOAD Dec 16 03:15:44.188000 audit: BPF prog-id=7 op=UNLOAD Dec 16 03:15:44.189000 audit: BPF prog-id=28 op=LOAD Dec 16 03:15:44.189000 audit: BPF prog-id=29 op=LOAD Dec 16 03:15:44.192873 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:15:44.196000 audit: BPF prog-id=30 op=LOAD Dec 16 03:15:44.196000 audit: BPF prog-id=25 op=UNLOAD Dec 16 03:15:44.196000 audit: BPF prog-id=31 op=LOAD Dec 16 03:15:44.196000 audit: BPF prog-id=32 op=LOAD Dec 16 03:15:44.196000 audit: BPF prog-id=26 op=UNLOAD Dec 16 03:15:44.196000 audit: BPF prog-id=27 op=UNLOAD Dec 16 03:15:44.197000 audit: BPF prog-id=33 op=LOAD Dec 16 03:15:44.197000 audit: BPF prog-id=15 op=UNLOAD Dec 16 03:15:44.197000 audit: BPF prog-id=34 op=LOAD Dec 16 03:15:44.197000 audit: BPF prog-id=35 op=LOAD Dec 16 03:15:44.197000 audit: BPF prog-id=16 op=UNLOAD Dec 16 03:15:44.197000 audit: BPF prog-id=17 op=UNLOAD Dec 16 03:15:44.197000 audit: BPF prog-id=36 op=LOAD Dec 16 03:15:44.197000 audit: BPF prog-id=22 op=UNLOAD Dec 16 03:15:44.198000 audit: BPF prog-id=37 op=LOAD Dec 16 03:15:44.198000 audit: BPF prog-id=38 op=LOAD Dec 16 03:15:44.198000 audit: BPF prog-id=23 op=UNLOAD Dec 16 03:15:44.198000 audit: BPF prog-id=24 op=UNLOAD Dec 16 03:15:44.199000 audit: BPF 
prog-id=39 op=LOAD Dec 16 03:15:44.199000 audit: BPF prog-id=18 op=UNLOAD Dec 16 03:15:44.199000 audit: BPF prog-id=40 op=LOAD Dec 16 03:15:44.199000 audit: BPF prog-id=41 op=LOAD Dec 16 03:15:44.200000 audit: BPF prog-id=19 op=UNLOAD Dec 16 03:15:44.200000 audit: BPF prog-id=20 op=UNLOAD Dec 16 03:15:44.201000 audit: BPF prog-id=42 op=LOAD Dec 16 03:15:44.201000 audit: BPF prog-id=21 op=UNLOAD Dec 16 03:15:44.212823 systemd[1]: Reload requested from client PID 1388 ('systemctl') (unit ensure-sysext.service)... Dec 16 03:15:44.212838 systemd[1]: Reloading... Dec 16 03:15:44.219975 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 03:15:44.220009 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 03:15:44.220228 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 03:15:44.221883 systemd-udevd[1390]: Using default interface naming scheme 'v257'. Dec 16 03:15:44.223082 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Dec 16 03:15:44.223198 systemd-tmpfiles[1389]: ACLs are not supported, ignoring. Dec 16 03:15:44.231061 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 03:15:44.231169 systemd-tmpfiles[1389]: Skipping /boot Dec 16 03:15:44.239851 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 03:15:44.240159 systemd-tmpfiles[1389]: Skipping /boot Dec 16 03:15:44.269791 zram_generator::config[1425]: No configuration found. 
Dec 16 03:15:44.393780 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 03:15:44.412790 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 16 03:15:44.417781 kernel: ACPI: button: Power Button [PWRF] Dec 16 03:15:44.461635 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Dec 16 03:15:44.461702 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Dec 16 03:15:44.468788 kernel: Console: switching to colour dummy device 80x25 Dec 16 03:15:44.470972 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 16 03:15:44.471008 kernel: [drm] features: -context_init Dec 16 03:15:44.475774 kernel: [drm] number of scanouts: 1 Dec 16 03:15:44.475811 kernel: [drm] number of cap sets: 0 Dec 16 03:15:44.497966 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Dec 16 03:15:44.531765 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 16 03:15:44.531868 kernel: Console: switching to colour frame buffer device 160x50 Dec 16 03:15:44.533768 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 03:15:44.533961 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 03:15:44.560284 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 16 03:15:44.568551 kernel: EDAC MC: Ver: 3.0.0 Dec 16 03:15:44.599334 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 16 03:15:44.602670 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 03:15:44.611235 systemd[1]: Reloading finished in 398 ms. Dec 16 03:15:44.621986 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:15:44.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:44.625000 audit: BPF prog-id=43 op=LOAD Dec 16 03:15:44.625000 audit: BPF prog-id=33 op=UNLOAD Dec 16 03:15:44.625000 audit: BPF prog-id=44 op=LOAD Dec 16 03:15:44.625000 audit: BPF prog-id=45 op=LOAD Dec 16 03:15:44.625000 audit: BPF prog-id=34 op=UNLOAD Dec 16 03:15:44.625000 audit: BPF prog-id=35 op=UNLOAD Dec 16 03:15:44.626000 audit: BPF prog-id=46 op=LOAD Dec 16 03:15:44.626000 audit: BPF prog-id=39 op=UNLOAD Dec 16 03:15:44.626000 audit: BPF prog-id=47 op=LOAD Dec 16 03:15:44.626000 audit: BPF prog-id=48 op=LOAD Dec 16 03:15:44.626000 audit: BPF prog-id=40 op=UNLOAD Dec 16 03:15:44.626000 audit: BPF prog-id=41 op=UNLOAD Dec 16 03:15:44.626000 audit: BPF prog-id=49 op=LOAD Dec 16 03:15:44.626000 audit: BPF prog-id=50 op=LOAD Dec 16 03:15:44.626000 audit: BPF prog-id=28 op=UNLOAD Dec 16 03:15:44.626000 audit: BPF prog-id=29 op=UNLOAD Dec 16 03:15:44.627000 audit: BPF prog-id=51 op=LOAD Dec 16 03:15:44.627000 audit: BPF prog-id=30 op=UNLOAD Dec 16 03:15:44.627000 audit: BPF prog-id=52 op=LOAD Dec 16 03:15:44.627000 audit: BPF prog-id=53 op=LOAD Dec 16 03:15:44.627000 audit: BPF prog-id=31 op=UNLOAD Dec 16 03:15:44.627000 audit: BPF prog-id=32 op=UNLOAD Dec 16 03:15:44.628000 audit: BPF prog-id=54 op=LOAD Dec 16 03:15:44.628000 audit: BPF prog-id=42 op=UNLOAD Dec 16 03:15:44.628000 audit: BPF prog-id=55 op=LOAD Dec 16 03:15:44.628000 audit: BPF prog-id=36 op=UNLOAD Dec 16 03:15:44.628000 audit: BPF prog-id=56 op=LOAD Dec 16 03:15:44.628000 audit: BPF prog-id=57 op=LOAD Dec 16 03:15:44.628000 audit: BPF prog-id=37 op=UNLOAD Dec 16 03:15:44.628000 audit: BPF prog-id=38 op=UNLOAD Dec 16 03:15:44.639040 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:15:44.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:44.663414 systemd[1]: Finished ensure-sysext.service. Dec 16 03:15:44.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.681222 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Dec 16 03:15:44.684539 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:15:44.685419 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 03:15:44.695039 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 03:15:44.695200 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:15:44.696312 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 03:15:44.698358 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:15:44.699837 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:15:44.701910 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:15:44.708512 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 03:15:44.713976 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:15:44.714344 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:15:44.714419 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:15:44.716812 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Dec 16 03:15:44.723714 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 03:15:44.724879 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:15:44.726219 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 03:15:44.727000 audit: BPF prog-id=58 op=LOAD Dec 16 03:15:44.728924 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 03:15:44.731000 audit: BPF prog-id=59 op=LOAD Dec 16 03:15:44.732867 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 03:15:44.736900 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 03:15:44.747622 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:15:44.747970 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:15:44.749323 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 03:15:44.749675 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 03:15:44.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.751393 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:15:44.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:44.751603 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:15:44.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.752529 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:15:44.752810 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 03:15:44.756438 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:15:44.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.756613 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:15:44.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.757613 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Dec 16 03:15:44.758000 audit[1543]: SYSTEM_BOOT pid=1543 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.757985 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 03:15:44.760545 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:15:44.766891 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:15:44.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.772107 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 03:15:44.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.787501 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Dec 16 03:15:44.791641 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 03:15:44.792216 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:15:44.792336 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:15:44.796283 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 03:15:44.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.798600 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 03:15:44.802937 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 03:15:44.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:44.813360 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Dec 16 03:15:44.817000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 03:15:44.817000 audit[1570]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3d066050 a2=420 a3=0 items=0 ppid=1517 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:44.817000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:15:44.818432 augenrules[1570]: No rules Dec 16 03:15:44.821260 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 03:15:44.821453 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 03:15:44.884133 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:15:44.893581 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 03:15:44.894318 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 03:15:44.895579 systemd-networkd[1541]: lo: Link UP Dec 16 03:15:44.895585 systemd-networkd[1541]: lo: Gained carrier Dec 16 03:15:44.899512 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 03:15:44.900544 systemd[1]: Reached target network.target - Network. Dec 16 03:15:44.900891 systemd-networkd[1541]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:15:44.900898 systemd-networkd[1541]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 03:15:44.902464 systemd-networkd[1541]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:15:44.902521 systemd-networkd[1541]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 03:15:44.902968 systemd-networkd[1541]: eth1: Link UP Dec 16 03:15:44.903152 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 03:15:44.904068 systemd-networkd[1541]: eth1: Gained carrier Dec 16 03:15:44.904080 systemd-networkd[1541]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:15:44.906885 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 03:15:44.908088 systemd-networkd[1541]: eth0: Link UP Dec 16 03:15:44.911029 systemd-networkd[1541]: eth0: Gained carrier Dec 16 03:15:44.911052 systemd-networkd[1541]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:15:44.912117 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 03:15:44.912853 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 03:15:44.922633 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 03:15:44.936803 systemd-networkd[1541]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Dec 16 03:15:44.938125 systemd-timesyncd[1542]: Network configuration changed, trying to establish connection. Dec 16 03:15:44.963920 systemd-networkd[1541]: eth0: DHCPv4 address 65.108.246.88/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 16 03:15:44.964612 systemd-timesyncd[1542]: Network configuration changed, trying to establish connection. Dec 16 03:15:44.965649 systemd-timesyncd[1542]: Network configuration changed, trying to establish connection. 
Dec 16 03:15:45.395844 ldconfig[1531]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 03:15:45.403375 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 03:15:45.408169 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 03:15:45.431158 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 03:15:45.434339 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 03:15:45.435482 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 03:15:45.436411 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 03:15:45.437352 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 03:15:45.438534 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 03:15:45.439678 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 03:15:45.440907 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 03:15:45.442196 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 03:15:45.443179 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 03:15:45.444173 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 03:15:45.444352 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:15:45.445480 systemd[1]: Reached target timers.target - Timer Units. Dec 16 03:15:45.449571 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 03:15:45.453473 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Dec 16 03:15:45.458812 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 03:15:45.459413 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 03:15:45.461220 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 03:15:45.473517 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 03:15:45.474931 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 03:15:45.478095 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 03:15:45.479903 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:15:45.480540 systemd[1]: Reached target basic.target - Basic System. Dec 16 03:15:45.482363 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:15:45.482434 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:15:45.484266 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 03:15:45.489721 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 03:15:45.493870 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 03:15:45.499360 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 03:15:45.504487 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 03:15:45.511108 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 03:15:45.513005 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 03:15:45.515678 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Dec 16 03:15:45.522967 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 03:15:45.530035 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 03:15:45.536673 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Dec 16 03:15:45.541098 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 03:15:45.542455 jq[1594]: false Dec 16 03:15:45.552975 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 03:15:45.565119 coreos-metadata[1591]: Dec 16 03:15:45.565 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Dec 16 03:15:45.566926 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 03:15:45.567477 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 03:15:45.567916 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 03:15:45.575824 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Refreshing passwd entry cache Dec 16 03:15:45.575824 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Failure getting users, quitting Dec 16 03:15:45.575824 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 03:15:45.575824 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Refreshing group entry cache Dec 16 03:15:45.575824 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Failure getting groups, quitting Dec 16 03:15:45.575824 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Dec 16 03:15:45.576104 coreos-metadata[1591]: Dec 16 03:15:45.575 INFO Fetch successful Dec 16 03:15:45.576104 coreos-metadata[1591]: Dec 16 03:15:45.575 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Dec 16 03:15:45.576104 coreos-metadata[1591]: Dec 16 03:15:45.575 INFO Fetch successful Dec 16 03:15:45.570864 oslogin_cache_refresh[1598]: Refreshing passwd entry cache Dec 16 03:15:45.572851 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 03:15:45.573425 oslogin_cache_refresh[1598]: Failure getting users, quitting Dec 16 03:15:45.573444 oslogin_cache_refresh[1598]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 03:15:45.573491 oslogin_cache_refresh[1598]: Refreshing group entry cache Dec 16 03:15:45.574247 oslogin_cache_refresh[1598]: Failure getting groups, quitting Dec 16 03:15:45.574255 oslogin_cache_refresh[1598]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 03:15:45.581771 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 03:15:45.590293 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 03:15:45.592527 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 03:15:45.593443 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 03:15:45.593949 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 03:15:45.594843 extend-filesystems[1595]: Found /dev/sda6 Dec 16 03:15:45.594367 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 03:15:45.600191 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 03:15:45.601017 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 16 03:15:45.603035 extend-filesystems[1595]: Found /dev/sda9 Dec 16 03:15:45.608071 extend-filesystems[1595]: Checking size of /dev/sda9 Dec 16 03:15:45.604935 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 03:15:45.614078 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 03:15:45.623847 jq[1608]: true Dec 16 03:15:45.623986 update_engine[1605]: I20251216 03:15:45.622283 1605 main.cc:92] Flatcar Update Engine starting Dec 16 03:15:45.638980 extend-filesystems[1595]: Resized partition /dev/sda9 Dec 16 03:15:45.644450 extend-filesystems[1650]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 03:15:45.651960 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 8410107 blocks Dec 16 03:15:45.656769 tar[1622]: linux-amd64/LICENSE Dec 16 03:15:45.656769 tar[1622]: linux-amd64/helm Dec 16 03:15:45.669909 jq[1634]: true Dec 16 03:15:45.695409 dbus-daemon[1592]: [system] SELinux support is enabled Dec 16 03:15:45.707995 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 03:15:45.714012 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 03:15:45.717070 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 03:15:45.717220 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 03:15:45.717297 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 03:15:45.719466 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Dec 16 03:15:45.722767 update_engine[1605]: I20251216 03:15:45.720463 1605 update_check_scheduler.cc:74] Next update check in 7m34s Dec 16 03:15:45.720873 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 03:15:45.721359 systemd[1]: Started update-engine.service - Update Engine. Dec 16 03:15:45.748443 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 03:15:45.766373 systemd-logind[1604]: New seat seat0. Dec 16 03:15:45.788072 systemd-logind[1604]: Watching system buttons on /dev/input/event3 (Power Button) Dec 16 03:15:45.788092 systemd-logind[1604]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 03:15:45.791415 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 03:15:45.845139 bash[1675]: Updated "/home/core/.ssh/authorized_keys" Dec 16 03:15:45.845840 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 03:15:45.854979 systemd[1]: Starting sshkeys.service... Dec 16 03:15:45.906693 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 03:15:45.913945 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Dec 16 03:15:45.952122 kernel: EXT4-fs (sda9): resized filesystem to 8410107
Dec 16 03:15:45.960866 coreos-metadata[1683]: Dec 16 03:15:45.960 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Dec 16 03:15:45.972988 coreos-metadata[1683]: Dec 16 03:15:45.965 INFO Fetch successful
Dec 16 03:15:45.974446 unknown[1683]: wrote ssh authorized keys file for user: core
Dec 16 03:15:45.975509 extend-filesystems[1650]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 16 03:15:45.975509 extend-filesystems[1650]: old_desc_blocks = 1, new_desc_blocks = 5
Dec 16 03:15:45.975509 extend-filesystems[1650]: The filesystem on /dev/sda9 is now 8410107 (4k) blocks long.
Dec 16 03:15:45.995566 extend-filesystems[1595]: Resized filesystem in /dev/sda9
Dec 16 03:15:45.996150 containerd[1630]: time="2025-12-16T03:15:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 03:15:45.996150 containerd[1630]: time="2025-12-16T03:15:45.977872858Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Dec 16 03:15:45.996150 containerd[1630]: time="2025-12-16T03:15:45.987428780Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.613µs"
Dec 16 03:15:45.996150 containerd[1630]: time="2025-12-16T03:15:45.987448877Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 03:15:45.996150 containerd[1630]: time="2025-12-16T03:15:45.987474546Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 03:15:45.996150 containerd[1630]: time="2025-12-16T03:15:45.987483663Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 03:15:45.996150 containerd[1630]: time="2025-12-16T03:15:45.987626301Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 03:15:45.996150 containerd[1630]: time="2025-12-16T03:15:45.987640908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 03:15:45.996150 containerd[1630]: time="2025-12-16T03:15:45.987706852Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 03:15:45.996150 containerd[1630]: time="2025-12-16T03:15:45.987717461Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 03:15:45.996150 containerd[1630]: time="2025-12-16T03:15:45.987974123Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 03:15:45.996150 containerd[1630]: time="2025-12-16T03:15:45.987988430Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 03:15:45.983192 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 03:15:46.000266 containerd[1630]: time="2025-12-16T03:15:45.987996645Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 03:15:46.000266 containerd[1630]: time="2025-12-16T03:15:45.988022513Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Dec 16 03:15:46.000266 containerd[1630]: time="2025-12-16T03:15:45.988160352Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Dec 16 03:15:46.000266 containerd[1630]: time="2025-12-16T03:15:45.988171743Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 03:15:46.000266 containerd[1630]: time="2025-12-16T03:15:45.988252515Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 03:15:46.000266 containerd[1630]: time="2025-12-16T03:15:45.988456167Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 03:15:46.000266 containerd[1630]: time="2025-12-16T03:15:45.988499909Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 03:15:46.000266 containerd[1630]: time="2025-12-16T03:15:45.988508375Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 03:15:46.000266 containerd[1630]: time="2025-12-16T03:15:45.988558549Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 03:15:46.000266 containerd[1630]: time="2025-12-16T03:15:45.988852641Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 03:15:46.000266 containerd[1630]: time="2025-12-16T03:15:45.988919355Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 03:15:45.983411 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 03:15:45.996159 locksmithd[1667]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 03:15:46.003499 containerd[1630]: time="2025-12-16T03:15:46.002939961Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 03:15:46.003499 containerd[1630]: time="2025-12-16T03:15:46.002980137Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Dec 16 03:15:46.003499 containerd[1630]: time="2025-12-16T03:15:46.003042935Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Dec 16 03:15:46.003499 containerd[1630]: time="2025-12-16T03:15:46.003054046Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 03:15:46.003499 containerd[1630]: time="2025-12-16T03:15:46.003064685Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 03:15:46.003499 containerd[1630]: time="2025-12-16T03:15:46.003073652Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 03:15:46.003499 containerd[1630]: time="2025-12-16T03:15:46.003082058Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 03:15:46.003499 containerd[1630]: time="2025-12-16T03:15:46.003089041Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 03:15:46.003499 containerd[1630]: time="2025-12-16T03:15:46.003097277Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 03:15:46.003499 containerd[1630]: time="2025-12-16T03:15:46.003112455Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 03:15:46.003499 containerd[1630]: time="2025-12-16T03:15:46.003120520Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 03:15:46.003499 containerd[1630]: time="2025-12-16T03:15:46.003128345Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 03:15:46.003499 containerd[1630]: time="2025-12-16T03:15:46.003138153Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 03:15:46.003499 containerd[1630]: time="2025-12-16T03:15:46.003149644Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 03:15:46.003703 containerd[1630]: time="2025-12-16T03:15:46.003227871Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 03:15:46.003703 containerd[1630]: time="2025-12-16T03:15:46.003245575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 03:15:46.003703 containerd[1630]: time="2025-12-16T03:15:46.003258469Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 03:15:46.003703 containerd[1630]: time="2025-12-16T03:15:46.003272675Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 03:15:46.003703 containerd[1630]: time="2025-12-16T03:15:46.003281572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 03:15:46.003703 containerd[1630]: time="2025-12-16T03:15:46.003289457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 03:15:46.003703 containerd[1630]: time="2025-12-16T03:15:46.003298053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 03:15:46.003703 containerd[1630]: time="2025-12-16T03:15:46.003308021Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 03:15:46.003703 containerd[1630]: time="2025-12-16T03:15:46.003315987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 16 03:15:46.003703 containerd[1630]: time="2025-12-16T03:15:46.003323701Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 16 03:15:46.003703 containerd[1630]: time="2025-12-16T03:15:46.003331616Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 16 03:15:46.003703 containerd[1630]: time="2025-12-16T03:15:46.003350962Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 16 03:15:46.003703 containerd[1630]: time="2025-12-16T03:15:46.003391629Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 16 03:15:46.003703 containerd[1630]: time="2025-12-16T03:15:46.003401036Z" level=info msg="Start snapshots syncer"
Dec 16 03:15:46.003703 containerd[1630]: time="2025-12-16T03:15:46.003413660Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 16 03:15:46.003944 containerd[1630]: time="2025-12-16T03:15:46.003605430Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 16 03:15:46.003944 containerd[1630]: time="2025-12-16T03:15:46.003641076Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 16 03:15:46.004036 containerd[1630]: time="2025-12-16T03:15:46.003674770Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 16 03:15:46.004036 containerd[1630]: time="2025-12-16T03:15:46.003769597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 16 03:15:46.004036 containerd[1630]: time="2025-12-16T03:15:46.003788293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 16 03:15:46.004036 containerd[1630]: time="2025-12-16T03:15:46.003796959Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 16 03:15:46.004036 containerd[1630]: time="2025-12-16T03:15:46.003805745Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 16 03:15:46.004036 containerd[1630]: time="2025-12-16T03:15:46.003819050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 16 03:15:46.004036 containerd[1630]: time="2025-12-16T03:15:46.003830191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 16 03:15:46.004036 containerd[1630]: time="2025-12-16T03:15:46.003838387Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 16 03:15:46.004036 containerd[1630]: time="2025-12-16T03:15:46.003851301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 16 03:15:46.004036 containerd[1630]: time="2025-12-16T03:15:46.003862461Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 16 03:15:46.004036 containerd[1630]: time="2025-12-16T03:15:46.003883090Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 03:15:46.004036 containerd[1630]: time="2025-12-16T03:15:46.003892187Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 03:15:46.004036 containerd[1630]: time="2025-12-16T03:15:46.003898028Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 03:15:46.004210 containerd[1630]: time="2025-12-16T03:15:46.003905593Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 03:15:46.004210 containerd[1630]: time="2025-12-16T03:15:46.003911183Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 16 03:15:46.004210 containerd[1630]: time="2025-12-16T03:15:46.003923686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 16 03:15:46.004210 containerd[1630]: time="2025-12-16T03:15:46.003931571Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 16 03:15:46.004210 containerd[1630]: time="2025-12-16T03:15:46.003943273Z" level=info msg="runtime interface created"
Dec 16 03:15:46.004210 containerd[1630]: time="2025-12-16T03:15:46.003948433Z" level=info msg="created NRI interface"
Dec 16 03:15:46.004210 containerd[1630]: time="2025-12-16T03:15:46.003954164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 16 03:15:46.004210 containerd[1630]: time="2025-12-16T03:15:46.003964834Z" level=info msg="Connect containerd service"
Dec 16 03:15:46.004210 containerd[1630]: time="2025-12-16T03:15:46.003978610Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 03:15:46.006410 containerd[1630]: time="2025-12-16T03:15:46.006388309Z" level=error msg="failed to load cni during init, please check CRI plugin status
before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 03:15:46.023493 update-ssh-keys[1694]: Updated "/home/core/.ssh/authorized_keys" Dec 16 03:15:46.024217 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 03:15:46.039978 systemd[1]: Finished sshkeys.service. Dec 16 03:15:46.173204 containerd[1630]: time="2025-12-16T03:15:46.173111760Z" level=info msg="Start subscribing containerd event" Dec 16 03:15:46.173337 containerd[1630]: time="2025-12-16T03:15:46.173324649Z" level=info msg="Start recovering state" Dec 16 03:15:46.174969 containerd[1630]: time="2025-12-16T03:15:46.173916290Z" level=info msg="Start event monitor" Dec 16 03:15:46.174969 containerd[1630]: time="2025-12-16T03:15:46.174813241Z" level=info msg="Start cni network conf syncer for default" Dec 16 03:15:46.174969 containerd[1630]: time="2025-12-16T03:15:46.174822298Z" level=info msg="Start streaming server" Dec 16 03:15:46.174969 containerd[1630]: time="2025-12-16T03:15:46.174829582Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 03:15:46.174969 containerd[1630]: time="2025-12-16T03:15:46.174836064Z" level=info msg="runtime interface starting up..." Dec 16 03:15:46.174969 containerd[1630]: time="2025-12-16T03:15:46.174841835Z" level=info msg="starting plugins..." Dec 16 03:15:46.174969 containerd[1630]: time="2025-12-16T03:15:46.174854789Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 03:15:46.175276 containerd[1630]: time="2025-12-16T03:15:46.175163048Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 03:15:46.175463 containerd[1630]: time="2025-12-16T03:15:46.175451018Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 03:15:46.176004 systemd[1]: Started containerd.service - containerd container runtime. 
Dec 16 03:15:46.183086 containerd[1630]: time="2025-12-16T03:15:46.182940504Z" level=info msg="containerd successfully booted in 0.205914s" Dec 16 03:15:46.233934 sshd_keygen[1623]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 03:15:46.246392 tar[1622]: linux-amd64/README.md Dec 16 03:15:46.250988 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 03:15:46.255825 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 03:15:46.257330 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 03:15:46.266179 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 03:15:46.266357 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 03:15:46.267814 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 03:15:46.280350 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 03:15:46.284627 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 03:15:46.289132 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 03:15:46.290283 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 03:15:46.495966 systemd-networkd[1541]: eth0: Gained IPv6LL Dec 16 03:15:46.496557 systemd-timesyncd[1542]: Network configuration changed, trying to establish connection. Dec 16 03:15:46.498592 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 03:15:46.499981 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 03:15:46.511635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:15:46.513841 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 03:15:46.543443 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Dec 16 03:15:46.688062 systemd-networkd[1541]: eth1: Gained IPv6LL Dec 16 03:15:46.688966 systemd-timesyncd[1542]: Network configuration changed, trying to establish connection. Dec 16 03:15:47.974962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:15:47.977563 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 03:15:47.981282 systemd[1]: Startup finished in 3.788s (kernel) + 9.719s (initrd) + 5.253s (userspace) = 18.761s. Dec 16 03:15:47.993263 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:15:48.833146 kubelet[1747]: E1216 03:15:48.833050 1747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:15:48.836585 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:15:48.836860 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:15:48.837455 systemd[1]: kubelet.service: Consumed 1.503s CPU time, 267M memory peak. Dec 16 03:15:51.834609 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 03:15:51.837293 systemd[1]: Started sshd@0-65.108.246.88:22-139.178.89.65:52622.service - OpenSSH per-connection server daemon (139.178.89.65:52622). Dec 16 03:15:52.809277 sshd[1759]: Accepted publickey for core from 139.178.89.65 port 52622 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:15:52.811827 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:52.820895 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Dec 16 03:15:52.822931 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 03:15:52.830127 systemd-logind[1604]: New session 1 of user core. Dec 16 03:15:52.840558 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 03:15:52.844329 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 03:15:52.861380 (systemd)[1765]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:52.865472 systemd-logind[1604]: New session 2 of user core. Dec 16 03:15:53.043899 systemd[1765]: Queued start job for default target default.target. Dec 16 03:15:53.055976 systemd[1765]: Created slice app.slice - User Application Slice. Dec 16 03:15:53.056005 systemd[1765]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 03:15:53.056017 systemd[1765]: Reached target paths.target - Paths. Dec 16 03:15:53.056153 systemd[1765]: Reached target timers.target - Timers. Dec 16 03:15:53.057155 systemd[1765]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 03:15:53.057710 systemd[1765]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 03:15:53.068164 systemd[1765]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 03:15:53.069257 systemd[1765]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 03:15:53.069362 systemd[1765]: Reached target sockets.target - Sockets. Dec 16 03:15:53.069490 systemd[1765]: Reached target basic.target - Basic System. Dec 16 03:15:53.069633 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 03:15:53.069830 systemd[1765]: Reached target default.target - Main User Target. Dec 16 03:15:53.069873 systemd[1765]: Startup finished in 195ms. Dec 16 03:15:53.074990 systemd[1]: Started session-1.scope - Session 1 of User core. 
Dec 16 03:15:53.613325 systemd[1]: Started sshd@1-65.108.246.88:22-139.178.89.65:52632.service - OpenSSH per-connection server daemon (139.178.89.65:52632). Dec 16 03:15:54.538552 sshd[1779]: Accepted publickey for core from 139.178.89.65 port 52632 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:15:54.540666 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:54.549846 systemd-logind[1604]: New session 3 of user core. Dec 16 03:15:54.557997 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 03:15:55.067960 sshd[1783]: Connection closed by 139.178.89.65 port 52632 Dec 16 03:15:55.068519 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:55.072014 systemd[1]: sshd@1-65.108.246.88:22-139.178.89.65:52632.service: Deactivated successfully. Dec 16 03:15:55.073500 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 03:15:55.074427 systemd-logind[1604]: Session 3 logged out. Waiting for processes to exit. Dec 16 03:15:55.075669 systemd-logind[1604]: Removed session 3. Dec 16 03:15:55.219013 systemd[1]: Started sshd@2-65.108.246.88:22-139.178.89.65:52648.service - OpenSSH per-connection server daemon (139.178.89.65:52648). Dec 16 03:15:56.060239 sshd[1789]: Accepted publickey for core from 139.178.89.65 port 52648 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:15:56.062533 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:56.072847 systemd-logind[1604]: New session 4 of user core. Dec 16 03:15:56.081038 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 03:15:56.539217 sshd[1793]: Connection closed by 139.178.89.65 port 52648 Dec 16 03:15:56.540144 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:56.547738 systemd-logind[1604]: Session 4 logged out. Waiting for processes to exit. 
Dec 16 03:15:56.548077 systemd[1]: sshd@2-65.108.246.88:22-139.178.89.65:52648.service: Deactivated successfully. Dec 16 03:15:56.551394 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 03:15:56.554239 systemd-logind[1604]: Removed session 4. Dec 16 03:15:56.714077 systemd[1]: Started sshd@3-65.108.246.88:22-139.178.89.65:52658.service - OpenSSH per-connection server daemon (139.178.89.65:52658). Dec 16 03:15:57.577816 sshd[1799]: Accepted publickey for core from 139.178.89.65 port 52658 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:15:57.579209 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:57.584217 systemd-logind[1604]: New session 5 of user core. Dec 16 03:15:57.594952 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 03:15:58.061851 sshd[1803]: Connection closed by 139.178.89.65 port 52658 Dec 16 03:15:58.062879 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:58.067960 systemd[1]: sshd@3-65.108.246.88:22-139.178.89.65:52658.service: Deactivated successfully. Dec 16 03:15:58.070567 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 03:15:58.073479 systemd-logind[1604]: Session 5 logged out. Waiting for processes to exit. Dec 16 03:15:58.075249 systemd-logind[1604]: Removed session 5. Dec 16 03:15:58.238694 systemd[1]: Started sshd@4-65.108.246.88:22-139.178.89.65:52674.service - OpenSSH per-connection server daemon (139.178.89.65:52674). Dec 16 03:15:58.937594 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 03:15:58.940730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:15:59.090550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 03:15:59.093162 (kubelet)[1820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:15:59.110539 sshd[1809]: Accepted publickey for core from 139.178.89.65 port 52674 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:15:59.113577 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:59.125525 systemd-logind[1604]: New session 6 of user core. Dec 16 03:15:59.131207 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 03:15:59.134963 kubelet[1820]: E1216 03:15:59.134096 1820 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:15:59.140910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:15:59.141096 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:15:59.142034 systemd[1]: kubelet.service: Consumed 158ms CPU time, 110.5M memory peak. Dec 16 03:15:59.455978 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 03:15:59.456431 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:15:59.470913 sudo[1829]: pam_unix(sudo:session): session closed for user root Dec 16 03:15:59.630322 sshd[1827]: Connection closed by 139.178.89.65 port 52674 Dec 16 03:15:59.631514 sshd-session[1809]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:59.637434 systemd[1]: sshd@4-65.108.246.88:22-139.178.89.65:52674.service: Deactivated successfully. Dec 16 03:15:59.640336 systemd[1]: session-6.scope: Deactivated successfully. 
Dec 16 03:15:59.642468 systemd-logind[1604]: Session 6 logged out. Waiting for processes to exit. Dec 16 03:15:59.644611 systemd-logind[1604]: Removed session 6. Dec 16 03:15:59.838559 systemd[1]: Started sshd@5-65.108.246.88:22-139.178.89.65:52678.service - OpenSSH per-connection server daemon (139.178.89.65:52678). Dec 16 03:16:00.800183 sshd[1836]: Accepted publickey for core from 139.178.89.65 port 52678 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:16:00.802581 sshd-session[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:00.810633 systemd-logind[1604]: New session 7 of user core. Dec 16 03:16:00.822133 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 03:16:01.161548 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 03:16:01.162150 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:16:01.164694 sudo[1842]: pam_unix(sudo:session): session closed for user root Dec 16 03:16:01.170513 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 03:16:01.170900 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:16:01.178320 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 03:16:01.211000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 03:16:01.213992 kernel: kauditd_printk_skb: 190 callbacks suppressed Dec 16 03:16:01.214056 kernel: audit: type=1305 audit(1765854961.211:235): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 03:16:01.214081 augenrules[1866]: No rules Dec 16 03:16:01.215156 systemd[1]: audit-rules.service: Deactivated successfully. 
Dec 16 03:16:01.215415 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 03:16:01.211000 audit[1866]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe77fdee40 a2=420 a3=0 items=0 ppid=1847 pid=1866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:01.221110 sudo[1841]: pam_unix(sudo:session): session closed for user root Dec 16 03:16:01.231065 kernel: audit: type=1300 audit(1765854961.211:235): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe77fdee40 a2=420 a3=0 items=0 ppid=1847 pid=1866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:01.211000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:16:01.237814 kernel: audit: type=1327 audit(1765854961.211:235): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:16:01.237909 kernel: audit: type=1130 audit(1765854961.215:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:01.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:01.248810 kernel: audit: type=1131 audit(1765854961.215:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:01.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:01.220000 audit[1841]: USER_END pid=1841 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:01.264416 kernel: audit: type=1106 audit(1765854961.220:238): pid=1841 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:01.264494 kernel: audit: type=1104 audit(1765854961.220:239): pid=1841 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:01.220000 audit[1841]: CRED_DISP pid=1841 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:01.398257 sshd[1840]: Connection closed by 139.178.89.65 port 52678 Dec 16 03:16:01.399000 sshd-session[1836]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:01.399000 audit[1836]: USER_END pid=1836 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:16:01.404143 systemd[1]: sshd@5-65.108.246.88:22-139.178.89.65:52678.service: Deactivated successfully. 
Dec 16 03:16:01.406351 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 03:16:01.407401 systemd-logind[1604]: Session 7 logged out. Waiting for processes to exit. Dec 16 03:16:01.400000 audit[1836]: CRED_DISP pid=1836 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:16:01.409782 systemd-logind[1604]: Removed session 7. Dec 16 03:16:01.414952 kernel: audit: type=1106 audit(1765854961.399:240): pid=1836 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:16:01.415011 kernel: audit: type=1104 audit(1765854961.400:241): pid=1836 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:16:01.415030 kernel: audit: type=1131 audit(1765854961.403:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-65.108.246.88:22-139.178.89.65:52678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:01.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-65.108.246.88:22-139.178.89.65:52678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:01.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-65.108.246.88:22-139.178.89.65:52990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:01.552306 systemd[1]: Started sshd@6-65.108.246.88:22-139.178.89.65:52990.service - OpenSSH per-connection server daemon (139.178.89.65:52990). Dec 16 03:16:02.405000 audit[1875]: USER_ACCT pid=1875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:16:02.406983 sshd[1875]: Accepted publickey for core from 139.178.89.65 port 52990 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:16:02.407000 audit[1875]: CRED_ACQ pid=1875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:16:02.407000 audit[1875]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe308b5440 a2=3 a3=0 items=0 ppid=1 pid=1875 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:02.407000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:02.408580 sshd-session[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:02.414387 systemd-logind[1604]: New session 8 of user core. Dec 16 03:16:02.416936 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 03:16:02.418000 audit[1875]: USER_START pid=1875 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:16:02.420000 audit[1879]: CRED_ACQ pid=1879 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:16:02.736000 audit[1880]: USER_ACCT pid=1880 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:02.737480 sudo[1880]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 03:16:02.737990 sudo[1880]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:16:02.736000 audit[1880]: CRED_REFR pid=1880 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:02.737000 audit[1880]: USER_START pid=1880 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:03.312838 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 03:16:03.335099 (dockerd)[1898]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 03:16:03.746560 dockerd[1898]: time="2025-12-16T03:16:03.746466205Z" level=info msg="Starting up" Dec 16 03:16:03.747959 dockerd[1898]: time="2025-12-16T03:16:03.747898611Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 03:16:03.763928 dockerd[1898]: time="2025-12-16T03:16:03.763859046Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 03:16:03.826208 dockerd[1898]: time="2025-12-16T03:16:03.826148320Z" level=info msg="Loading containers: start." Dec 16 03:16:03.843872 kernel: Initializing XFRM netlink socket Dec 16 03:16:03.919000 audit[1945]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:03.919000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe09fdca20 a2=0 a3=0 items=0 ppid=1898 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.919000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 03:16:03.921000 audit[1947]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:03.921000 audit[1947]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe369c50a0 a2=0 a3=0 items=0 ppid=1898 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:16:03.921000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 03:16:03.923000 audit[1949]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:03.923000 audit[1949]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe11dcaab0 a2=0 a3=0 items=0 ppid=1898 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.923000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 03:16:03.925000 audit[1951]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:03.925000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfdaec200 a2=0 a3=0 items=0 ppid=1898 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.925000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 03:16:03.927000 audit[1953]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:03.927000 audit[1953]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd9ce44810 a2=0 a3=0 items=0 ppid=1898 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.927000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 03:16:03.929000 audit[1955]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:03.929000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdee4b1ef0 a2=0 a3=0 items=0 ppid=1898 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.929000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:16:03.931000 audit[1957]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:03.931000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffe17153f0 a2=0 a3=0 items=0 ppid=1898 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.931000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:16:03.933000 audit[1959]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:03.933000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffef4266b00 a2=0 a3=0 items=0 ppid=1898 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.933000 
audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 03:16:03.968000 audit[1962]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:03.968000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffe5547edd0 a2=0 a3=0 items=0 ppid=1898 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.968000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 03:16:03.971000 audit[1964]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:03.971000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff07076190 a2=0 a3=0 items=0 ppid=1898 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.971000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:16:03.974000 audit[1966]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:03.974000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe609924d0 a2=0 a3=0 items=0 ppid=1898 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.974000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 03:16:03.976000 audit[1968]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:03.976000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe8cff4b00 a2=0 a3=0 items=0 ppid=1898 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.976000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:16:03.978000 audit[1970]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:03.978000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe8eb1bb00 a2=0 a3=0 items=0 ppid=1898 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.978000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:16:04.023000 audit[2000]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:04.023000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffde81f2f80 a2=0 a3=0 items=0 ppid=1898 pid=2000 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.023000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 03:16:04.026000 audit[2002]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:04.026000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff130f8620 a2=0 a3=0 items=0 ppid=1898 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.026000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 03:16:04.029000 audit[2004]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:04.029000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe668a80e0 a2=0 a3=0 items=0 ppid=1898 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.029000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 03:16:04.031000 audit[2006]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:04.031000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2c4f4470 a2=0 a3=0 items=0 ppid=1898 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.031000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 03:16:04.033000 audit[2008]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:04.033000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe5b2d3790 a2=0 a3=0 items=0 ppid=1898 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.033000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 03:16:04.036000 audit[2010]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:04.036000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffee55e5610 a2=0 a3=0 items=0 ppid=1898 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.036000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:16:04.038000 audit[2012]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:04.038000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc444b4d30 a2=0 a3=0 items=0 ppid=1898 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.038000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:16:04.041000 audit[2014]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:04.041000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc160f3c20 a2=0 a3=0 items=0 ppid=1898 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.041000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 03:16:04.044000 audit[2016]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:04.044000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fff7f013b70 a2=0 a3=0 items=0 ppid=1898 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.044000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 03:16:04.046000 audit[2018]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 
03:16:04.046000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe5a038e20 a2=0 a3=0 items=0 ppid=1898 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.046000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:16:04.049000 audit[2020]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:04.049000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffedaab9290 a2=0 a3=0 items=0 ppid=1898 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.049000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 03:16:04.051000 audit[2022]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:04.051000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc188a3e10 a2=0 a3=0 items=0 ppid=1898 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:16:04.053000 audit[2024]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2024 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:04.053000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdf555eda0 a2=0 a3=0 items=0 ppid=1898 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.053000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:16:04.059000 audit[2029]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:04.059000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff5fa06fd0 a2=0 a3=0 items=0 ppid=1898 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.059000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:16:04.062000 audit[2031]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:04.062000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe6beaf880 a2=0 a3=0 items=0 ppid=1898 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.062000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:16:04.064000 audit[2033]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2033 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:04.064000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff8b3bcf20 a2=0 a3=0 items=0 ppid=1898 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.064000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:16:04.067000 audit[2035]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:04.067000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd4f3cf160 a2=0 a3=0 items=0 ppid=1898 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.067000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:16:04.070000 audit[2037]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:04.070000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc215b0ed0 a2=0 a3=0 items=0 ppid=1898 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.070000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:16:04.072000 audit[2039]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2039 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:04.072000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffeb4e49250 a2=0 a3=0 items=0 ppid=1898 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.072000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:16:04.082511 systemd-timesyncd[1542]: Network configuration changed, trying to establish connection. Dec 16 03:16:04.102000 audit[2044]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:04.102000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff7575c950 a2=0 a3=0 items=0 ppid=1898 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.102000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 03:16:04.104000 audit[2046]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:04.104000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff055a39e0 a2=0 a3=0 items=0 ppid=1898 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.104000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 03:16:04.114000 audit[2054]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:04.114000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd330c7570 a2=0 a3=0 items=0 ppid=1898 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.114000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 03:16:04.120191 systemd-timesyncd[1542]: Contacted time server 193.203.3.171:123 (2.flatcar.pool.ntp.org). Dec 16 03:16:04.120267 systemd-timesyncd[1542]: Initial clock synchronization to Tue 2025-12-16 03:16:04.348351 UTC. 
Dec 16 03:16:04.126000 audit[2060]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:04.126000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdb82e0f90 a2=0 a3=0 items=0 ppid=1898 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.126000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 03:16:04.128000 audit[2062]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:04.128000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff36d98d60 a2=0 a3=0 items=0 ppid=1898 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.128000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 03:16:04.131000 audit[2064]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:04.131000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd338a0110 a2=0 a3=0 items=0 ppid=1898 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 03:16:04.131000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 03:16:04.134000 audit[2066]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:04.134000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc41ff1620 a2=0 a3=0 items=0 ppid=1898 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.134000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:16:04.136000 audit[2068]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:04.136000 audit[2068]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffec61a5e70 a2=0 a3=0 items=0 ppid=1898 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:04.136000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 03:16:04.137589 systemd-networkd[1541]: docker0: Link UP Dec 16 03:16:04.143467 dockerd[1898]: time="2025-12-16T03:16:04.143401251Z" level=info msg="Loading containers: done." 
Dec 16 03:16:04.160278 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck304469183-merged.mount: Deactivated successfully. Dec 16 03:16:04.168391 dockerd[1898]: time="2025-12-16T03:16:04.168313303Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 03:16:04.168583 dockerd[1898]: time="2025-12-16T03:16:04.168438328Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 03:16:04.168583 dockerd[1898]: time="2025-12-16T03:16:04.168568352Z" level=info msg="Initializing buildkit" Dec 16 03:16:04.194702 dockerd[1898]: time="2025-12-16T03:16:04.194622567Z" level=info msg="Completed buildkit initialization" Dec 16 03:16:04.207691 dockerd[1898]: time="2025-12-16T03:16:04.207630002Z" level=info msg="Daemon has completed initialization" Dec 16 03:16:04.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:04.208707 dockerd[1898]: time="2025-12-16T03:16:04.207827874Z" level=info msg="API listen on /run/docker.sock" Dec 16 03:16:04.208054 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 03:16:05.449845 containerd[1630]: time="2025-12-16T03:16:05.449776460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 03:16:06.030211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2199051329.mount: Deactivated successfully. 
Dec 16 03:16:06.843386 containerd[1630]: time="2025-12-16T03:16:06.843320154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:06.844801 containerd[1630]: time="2025-12-16T03:16:06.844581976Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=27403437" Dec 16 03:16:06.846001 containerd[1630]: time="2025-12-16T03:16:06.845965127Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:06.848581 containerd[1630]: time="2025-12-16T03:16:06.848553024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:06.849241 containerd[1630]: time="2025-12-16T03:16:06.849208479Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 1.399368775s" Dec 16 03:16:06.849284 containerd[1630]: time="2025-12-16T03:16:06.849244955Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 16 03:16:06.849982 containerd[1630]: time="2025-12-16T03:16:06.849875608Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 03:16:08.177327 containerd[1630]: time="2025-12-16T03:16:08.177266129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:08.178592 containerd[1630]: time="2025-12-16T03:16:08.178443441Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24983855" Dec 16 03:16:08.179438 containerd[1630]: time="2025-12-16T03:16:08.179409519Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:08.181736 containerd[1630]: time="2025-12-16T03:16:08.181717807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:08.182420 containerd[1630]: time="2025-12-16T03:16:08.182389233Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.332332814s" Dec 16 03:16:08.182489 containerd[1630]: time="2025-12-16T03:16:08.182476756Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 16 03:16:08.183128 containerd[1630]: time="2025-12-16T03:16:08.183101562Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 03:16:09.199321 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 03:16:09.202119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:16:09.320922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 03:16:09.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:09.326032 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 03:16:09.326110 kernel: audit: type=1130 audit(1765854969.320:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:09.332998 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:16:09.363461 containerd[1630]: time="2025-12-16T03:16:09.362783850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:09.364726 containerd[1630]: time="2025-12-16T03:16:09.364708265Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19396111" Dec 16 03:16:09.365528 containerd[1630]: time="2025-12-16T03:16:09.365510040Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:09.368963 containerd[1630]: time="2025-12-16T03:16:09.368943539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:09.369512 containerd[1630]: time="2025-12-16T03:16:09.369392231Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.18626421s" Dec 16 03:16:09.369637 containerd[1630]: time="2025-12-16T03:16:09.369624575Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 16 03:16:09.370351 containerd[1630]: time="2025-12-16T03:16:09.370334825Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 03:16:09.372883 kubelet[2180]: E1216 03:16:09.372850 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:16:09.374651 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:16:09.374764 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:16:09.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:16:09.375112 systemd[1]: kubelet.service: Consumed 111ms CPU time, 107.8M memory peak. Dec 16 03:16:09.379928 kernel: audit: type=1131 audit(1765854969.374:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:16:10.398184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2875610134.mount: Deactivated successfully. 
Dec 16 03:16:10.702750 containerd[1630]: time="2025-12-16T03:16:10.702698273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:10.703792 containerd[1630]: time="2025-12-16T03:16:10.703640116Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31157702" Dec 16 03:16:10.704575 containerd[1630]: time="2025-12-16T03:16:10.704547722Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:10.706148 containerd[1630]: time="2025-12-16T03:16:10.706119837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:10.706575 containerd[1630]: time="2025-12-16T03:16:10.706546879Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.335742454s" Dec 16 03:16:10.706656 containerd[1630]: time="2025-12-16T03:16:10.706642811Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 16 03:16:10.707171 containerd[1630]: time="2025-12-16T03:16:10.707078809Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 03:16:11.200320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2571978379.mount: Deactivated successfully. 
Dec 16 03:16:12.083517 containerd[1630]: time="2025-12-16T03:16:12.083461937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:12.084596 containerd[1630]: time="2025-12-16T03:16:12.084425669Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Dec 16 03:16:12.085423 containerd[1630]: time="2025-12-16T03:16:12.085399614Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:12.087465 containerd[1630]: time="2025-12-16T03:16:12.087431449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:12.088595 containerd[1630]: time="2025-12-16T03:16:12.088203076Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.380927122s" Dec 16 03:16:12.088595 containerd[1630]: time="2025-12-16T03:16:12.088228484Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 16 03:16:12.088863 containerd[1630]: time="2025-12-16T03:16:12.088832653Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 03:16:12.961500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2127020419.mount: Deactivated successfully. 
Dec 16 03:16:12.969222 containerd[1630]: time="2025-12-16T03:16:12.969163894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:16:12.970195 containerd[1630]: time="2025-12-16T03:16:12.970115277Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 03:16:12.972348 containerd[1630]: time="2025-12-16T03:16:12.971160555Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:16:12.973450 containerd[1630]: time="2025-12-16T03:16:12.973407626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:16:12.974235 containerd[1630]: time="2025-12-16T03:16:12.974210172Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 885.348199ms" Dec 16 03:16:12.974336 containerd[1630]: time="2025-12-16T03:16:12.974303327Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 03:16:12.975098 containerd[1630]: time="2025-12-16T03:16:12.975069129Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 03:16:13.651622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547398143.mount: Deactivated 
successfully. Dec 16 03:16:17.554691 containerd[1630]: time="2025-12-16T03:16:17.554628917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:17.555718 containerd[1630]: time="2025-12-16T03:16:17.555579514Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55833140" Dec 16 03:16:17.556409 containerd[1630]: time="2025-12-16T03:16:17.556387058Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:17.558364 containerd[1630]: time="2025-12-16T03:16:17.558346767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:17.559018 containerd[1630]: time="2025-12-16T03:16:17.558987687Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.583824441s" Dec 16 03:16:17.559071 containerd[1630]: time="2025-12-16T03:16:17.559021110Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 16 03:16:19.449410 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 03:16:19.454012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:16:19.595287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 03:16:19.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:19.600790 kernel: audit: type=1130 audit(1765854979.594:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:19.600626 (kubelet)[2333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:16:19.639662 kubelet[2333]: E1216 03:16:19.639627 2333 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:16:19.641665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:16:19.641895 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:16:19.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:16:19.646855 systemd[1]: kubelet.service: Consumed 127ms CPU time, 110.1M memory peak. Dec 16 03:16:19.647873 kernel: audit: type=1131 audit(1765854979.641:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:16:20.686111 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 03:16:20.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:20.686269 systemd[1]: kubelet.service: Consumed 127ms CPU time, 110.1M memory peak. Dec 16 03:16:20.689951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:16:20.692838 kernel: audit: type=1130 audit(1765854980.684:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:20.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:20.693781 kernel: audit: type=1131 audit(1765854980.684:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:20.722244 systemd[1]: Reload requested from client PID 2349 ('systemctl') (unit session-8.scope)... Dec 16 03:16:20.722369 systemd[1]: Reloading... Dec 16 03:16:20.808787 zram_generator::config[2396]: No configuration found. Dec 16 03:16:20.993932 systemd[1]: Reloading finished in 271 ms. 
Dec 16 03:16:21.022000 audit: BPF prog-id=63 op=LOAD Dec 16 03:16:21.026782 kernel: audit: type=1334 audit(1765854981.022:299): prog-id=63 op=LOAD Dec 16 03:16:21.022000 audit: BPF prog-id=54 op=UNLOAD Dec 16 03:16:21.025000 audit: BPF prog-id=64 op=LOAD Dec 16 03:16:21.029937 kernel: audit: type=1334 audit(1765854981.022:300): prog-id=54 op=UNLOAD Dec 16 03:16:21.029971 kernel: audit: type=1334 audit(1765854981.025:301): prog-id=64 op=LOAD Dec 16 03:16:21.025000 audit: BPF prog-id=55 op=UNLOAD Dec 16 03:16:21.032113 kernel: audit: type=1334 audit(1765854981.025:302): prog-id=55 op=UNLOAD Dec 16 03:16:21.026000 audit: BPF prog-id=65 op=LOAD Dec 16 03:16:21.039794 kernel: audit: type=1334 audit(1765854981.026:303): prog-id=65 op=LOAD Dec 16 03:16:21.026000 audit: BPF prog-id=66 op=LOAD Dec 16 03:16:21.026000 audit: BPF prog-id=56 op=UNLOAD Dec 16 03:16:21.026000 audit: BPF prog-id=57 op=UNLOAD Dec 16 03:16:21.026000 audit: BPF prog-id=67 op=LOAD Dec 16 03:16:21.033000 audit: BPF prog-id=43 op=UNLOAD Dec 16 03:16:21.033000 audit: BPF prog-id=68 op=LOAD Dec 16 03:16:21.033000 audit: BPF prog-id=69 op=LOAD Dec 16 03:16:21.033000 audit: BPF prog-id=44 op=UNLOAD Dec 16 03:16:21.033000 audit: BPF prog-id=45 op=UNLOAD Dec 16 03:16:21.033000 audit: BPF prog-id=70 op=LOAD Dec 16 03:16:21.033000 audit: BPF prog-id=71 op=LOAD Dec 16 03:16:21.033000 audit: BPF prog-id=49 op=UNLOAD Dec 16 03:16:21.033000 audit: BPF prog-id=50 op=UNLOAD Dec 16 03:16:21.043799 kernel: audit: type=1334 audit(1765854981.026:304): prog-id=66 op=LOAD Dec 16 03:16:21.034000 audit: BPF prog-id=72 op=LOAD Dec 16 03:16:21.034000 audit: BPF prog-id=51 op=UNLOAD Dec 16 03:16:21.034000 audit: BPF prog-id=73 op=LOAD Dec 16 03:16:21.034000 audit: BPF prog-id=74 op=LOAD Dec 16 03:16:21.034000 audit: BPF prog-id=52 op=UNLOAD Dec 16 03:16:21.034000 audit: BPF prog-id=53 op=UNLOAD Dec 16 03:16:21.034000 audit: BPF prog-id=75 op=LOAD Dec 16 03:16:21.034000 audit: BPF prog-id=59 op=UNLOAD Dec 16 03:16:21.035000 
audit: BPF prog-id=76 op=LOAD Dec 16 03:16:21.035000 audit: BPF prog-id=46 op=UNLOAD Dec 16 03:16:21.035000 audit: BPF prog-id=77 op=LOAD Dec 16 03:16:21.035000 audit: BPF prog-id=78 op=LOAD Dec 16 03:16:21.035000 audit: BPF prog-id=47 op=UNLOAD Dec 16 03:16:21.035000 audit: BPF prog-id=48 op=UNLOAD Dec 16 03:16:21.039000 audit: BPF prog-id=79 op=LOAD Dec 16 03:16:21.039000 audit: BPF prog-id=60 op=UNLOAD Dec 16 03:16:21.039000 audit: BPF prog-id=80 op=LOAD Dec 16 03:16:21.039000 audit: BPF prog-id=81 op=LOAD Dec 16 03:16:21.039000 audit: BPF prog-id=61 op=UNLOAD Dec 16 03:16:21.039000 audit: BPF prog-id=62 op=UNLOAD Dec 16 03:16:21.040000 audit: BPF prog-id=82 op=LOAD Dec 16 03:16:21.040000 audit: BPF prog-id=58 op=UNLOAD Dec 16 03:16:21.055138 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 03:16:21.055207 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 03:16:21.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:16:21.055441 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:16:21.055498 systemd[1]: kubelet.service: Consumed 78ms CPU time, 97.8M memory peak. Dec 16 03:16:21.056900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:16:21.186231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:16:21.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:21.195069 (kubelet)[2450]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:16:21.244528 kubelet[2450]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:16:21.244528 kubelet[2450]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 03:16:21.244528 kubelet[2450]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:16:21.244528 kubelet[2450]: I1216 03:16:21.244024 2450 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:16:21.433693 kubelet[2450]: I1216 03:16:21.433662 2450 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 03:16:21.433846 kubelet[2450]: I1216 03:16:21.433836 2450 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:16:21.434286 kubelet[2450]: I1216 03:16:21.434273 2450 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 03:16:21.468075 kubelet[2450]: E1216 03:16:21.468024 2450 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://65.108.246.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 65.108.246.88:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:16:21.470851 kubelet[2450]: 
I1216 03:16:21.470812 2450 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:16:21.486843 kubelet[2450]: I1216 03:16:21.486793 2450 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:16:21.491511 kubelet[2450]: I1216 03:16:21.491473 2450 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 03:16:21.493641 kubelet[2450]: I1216 03:16:21.493599 2450 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:16:21.493827 kubelet[2450]: I1216 03:16:21.493633 2450 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-6-1137cb7bd3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"
TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:16:21.495678 kubelet[2450]: I1216 03:16:21.495608 2450 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 03:16:21.495678 kubelet[2450]: I1216 03:16:21.495629 2450 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 03:16:21.497261 kubelet[2450]: I1216 03:16:21.497231 2450 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:16:21.500971 kubelet[2450]: I1216 03:16:21.500890 2450 kubelet.go:446] "Attempting to sync node with API server" Dec 16 03:16:21.500971 kubelet[2450]: I1216 03:16:21.500915 2450 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:16:21.503409 kubelet[2450]: I1216 03:16:21.503287 2450 kubelet.go:352] "Adding apiserver pod source" Dec 16 03:16:21.503409 kubelet[2450]: I1216 03:16:21.503319 2450 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:16:21.508128 kubelet[2450]: W1216 03:16:21.507991 2450 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://65.108.246.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-0-0-6-1137cb7bd3&limit=500&resourceVersion=0": dial tcp 65.108.246.88:6443: connect: connection refused Dec 16 03:16:21.508128 kubelet[2450]: E1216 03:16:21.508033 2450 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://65.108.246.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-0-0-6-1137cb7bd3&limit=500&resourceVersion=0\": dial tcp 65.108.246.88:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:16:21.509119 
kubelet[2450]: I1216 03:16:21.509088 2450 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:16:21.513040 kubelet[2450]: I1216 03:16:21.513004 2450 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 03:16:21.513110 kubelet[2450]: W1216 03:16:21.513100 2450 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 03:16:21.518474 kubelet[2450]: W1216 03:16:21.517854 2450 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://65.108.246.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 65.108.246.88:6443: connect: connection refused Dec 16 03:16:21.518474 kubelet[2450]: E1216 03:16:21.517903 2450 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://65.108.246.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.108.246.88:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:16:21.518474 kubelet[2450]: I1216 03:16:21.518325 2450 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:16:21.518474 kubelet[2450]: I1216 03:16:21.518349 2450 server.go:1287] "Started kubelet" Dec 16 03:16:21.519071 kubelet[2450]: I1216 03:16:21.519022 2450 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:16:21.525400 kubelet[2450]: I1216 03:16:21.525338 2450 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:16:21.525798 kubelet[2450]: I1216 03:16:21.525752 2450 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:16:21.530490 kubelet[2450]: E1216 
03:16:21.528110 2450 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://65.108.246.88:6443/api/v1/namespaces/default/events\": dial tcp 65.108.246.88:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547-0-0-6-1137cb7bd3.188193bff24118dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547-0-0-6-1137cb7bd3,UID:ci-4547-0-0-6-1137cb7bd3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547-0-0-6-1137cb7bd3,},FirstTimestamp:2025-12-16 03:16:21.518334172 +0000 UTC m=+0.319017282,LastTimestamp:2025-12-16 03:16:21.518334172 +0000 UTC m=+0.319017282,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-6-1137cb7bd3,}" Dec 16 03:16:21.531186 kubelet[2450]: I1216 03:16:21.531166 2450 server.go:479] "Adding debug handlers to kubelet server" Dec 16 03:16:21.532972 kubelet[2450]: I1216 03:16:21.532907 2450 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:16:21.536801 kubelet[2450]: I1216 03:16:21.535709 2450 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:16:21.540880 kubelet[2450]: I1216 03:16:21.540806 2450 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:16:21.540964 kubelet[2450]: E1216 03:16:21.540946 2450 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" Dec 16 03:16:21.542710 kubelet[2450]: W1216 03:16:21.542594 2450 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://65.108.246.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 65.108.246.88:6443: 
connect: connection refused Dec 16 03:16:21.542845 kubelet[2450]: E1216 03:16:21.542830 2450 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://65.108.246.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.108.246.88:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:16:21.542934 kubelet[2450]: E1216 03:16:21.542680 2450 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.108.246.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-6-1137cb7bd3?timeout=10s\": dial tcp 65.108.246.88:6443: connect: connection refused" interval="200ms" Dec 16 03:16:21.543107 kubelet[2450]: I1216 03:16:21.543095 2450 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:16:21.543301 kubelet[2450]: I1216 03:16:21.543290 2450 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:16:21.541000 audit[2461]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:21.541000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe22e79d80 a2=0 a3=0 items=0 ppid=2450 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:21.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:16:21.545000 audit[2462]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:21.545000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd5bd39a0 a2=0 a3=0 items=0 ppid=2450 pid=2462 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:21.545000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:16:21.548131 kubelet[2450]: I1216 03:16:21.547947 2450 factory.go:221] Registration of the systemd container factory successfully Dec 16 03:16:21.548131 kubelet[2450]: I1216 03:16:21.548006 2450 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:16:21.548000 audit[2464]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:21.548000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe5df3fc30 a2=0 a3=0 items=0 ppid=2450 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:21.548000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:16:21.551125 kubelet[2450]: E1216 03:16:21.551113 2450 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:16:21.551557 kubelet[2450]: I1216 03:16:21.551468 2450 factory.go:221] Registration of the containerd container factory successfully Dec 16 03:16:21.551000 audit[2466]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:21.551000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcdcf18290 a2=0 a3=0 items=0 ppid=2450 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:21.551000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:16:21.558000 audit[2469]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:21.558000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe367897b0 a2=0 a3=0 items=0 ppid=2450 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:21.558000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 03:16:21.560631 kubelet[2450]: I1216 03:16:21.560573 2450 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 16 03:16:21.559000 audit[2470]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:21.559000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe3436a820 a2=0 a3=0 items=0 ppid=2450 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:21.559000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:16:21.562030 kubelet[2450]: I1216 03:16:21.561828 2450 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 03:16:21.562030 kubelet[2450]: I1216 03:16:21.561846 2450 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 03:16:21.562030 kubelet[2450]: I1216 03:16:21.561864 2450 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 03:16:21.562030 kubelet[2450]: I1216 03:16:21.561871 2450 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 03:16:21.562030 kubelet[2450]: E1216 03:16:21.561904 2450 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:16:21.560000 audit[2471]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:21.560000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1a2429b0 a2=0 a3=0 items=0 ppid=2450 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:21.560000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:16:21.563000 audit[2472]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:21.563000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd592d8510 a2=0 a3=0 items=0 ppid=2450 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:21.563000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:16:21.564000 audit[2474]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:16:21.564000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffcf399940 a2=0 a3=0 items=0 ppid=2450 pid=2474 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:21.564000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:16:21.565000 audit[2475]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:21.565000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe162607e0 a2=0 a3=0 items=0 ppid=2450 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:21.565000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:16:21.566000 audit[2476]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:21.566000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff497e3040 a2=0 a3=0 items=0 ppid=2450 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:21.566000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:16:21.567000 audit[2477]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:21.567000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd261e37c0 a2=0 a3=0 items=0 
ppid=2450 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:21.567000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:16:21.569477 kubelet[2450]: W1216 03:16:21.569444 2450 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://65.108.246.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 65.108.246.88:6443: connect: connection refused Dec 16 03:16:21.569866 kubelet[2450]: E1216 03:16:21.569848 2450 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://65.108.246.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.108.246.88:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:16:21.583429 kubelet[2450]: I1216 03:16:21.583366 2450 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:16:21.583429 kubelet[2450]: I1216 03:16:21.583381 2450 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:16:21.583429 kubelet[2450]: I1216 03:16:21.583396 2450 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:16:21.585886 kubelet[2450]: I1216 03:16:21.585673 2450 policy_none.go:49] "None policy: Start" Dec 16 03:16:21.585886 kubelet[2450]: I1216 03:16:21.585690 2450 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:16:21.585886 kubelet[2450]: I1216 03:16:21.585701 2450 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:16:21.592492 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 16 03:16:21.601527 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 03:16:21.605101 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 03:16:21.616770 kubelet[2450]: I1216 03:16:21.616600 2450 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 03:16:21.617631 kubelet[2450]: I1216 03:16:21.617295 2450 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:16:21.617631 kubelet[2450]: I1216 03:16:21.617307 2450 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:16:21.617631 kubelet[2450]: I1216 03:16:21.617565 2450 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:16:21.619697 kubelet[2450]: E1216 03:16:21.619650 2450 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 03:16:21.619978 kubelet[2450]: E1216 03:16:21.619904 2450 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547-0-0-6-1137cb7bd3\" not found" Dec 16 03:16:21.673429 systemd[1]: Created slice kubepods-burstable-podffdd102986f65bccb1603f02203a19ea.slice - libcontainer container kubepods-burstable-podffdd102986f65bccb1603f02203a19ea.slice. Dec 16 03:16:21.684786 kubelet[2450]: E1216 03:16:21.684661 2450 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:21.687188 systemd[1]: Created slice kubepods-burstable-pod6a7966c67a25a133a6fe97cc36798353.slice - libcontainer container kubepods-burstable-pod6a7966c67a25a133a6fe97cc36798353.slice. 
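The `kubepods-burstable-pod<uid>.slice` names systemd creates above follow the kubelet systemd cgroup driver's convention: QoS class and pod UID are joined with `-` into a nested slice name. A sketch of that mapping, reconstructed from the slice names visible in this log (the `-`-to-`_` escaping of dashed UIDs is an assumption about the driver; the UIDs in this log contain no dashes and pass through unchanged):

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Build the systemd slice name for a pod cgroup (sketch, see caveats above)."""
    uid = pod_uid.replace("-", "_")  # assumed escaping for dashed pod UIDs
    if qos_class == "guaranteed":
        # Guaranteed pods sit directly under kubepods.slice (assumption)
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos_class}-pod{uid}.slice"

print(pod_slice_name("burstable", "ffdd102986f65bccb1603f02203a19ea"))
# kubepods-burstable-podffdd102986f65bccb1603f02203a19ea.slice
```

The output matches the slice created for the kube-apiserver static pod later in the log, whose UID `ffdd102986f65bccb1603f02203a19ea` also appears in the volume-reconciler entries.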
Dec 16 03:16:21.689992 kubelet[2450]: E1216 03:16:21.689967 2450 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:21.692358 systemd[1]: Created slice kubepods-burstable-podbd737292ee6c34a65b3fc87526147c96.slice - libcontainer container kubepods-burstable-podbd737292ee6c34a65b3fc87526147c96.slice. Dec 16 03:16:21.693927 kubelet[2450]: E1216 03:16:21.693905 2450 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:21.718861 kubelet[2450]: I1216 03:16:21.718836 2450 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:21.719128 kubelet[2450]: E1216 03:16:21.719099 2450 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.108.246.88:6443/api/v1/nodes\": dial tcp 65.108.246.88:6443: connect: connection refused" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:21.743670 kubelet[2450]: E1216 03:16:21.743631 2450 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.108.246.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-6-1137cb7bd3?timeout=10s\": dial tcp 65.108.246.88:6443: connect: connection refused" interval="400ms" Dec 16 03:16:21.744888 kubelet[2450]: I1216 03:16:21.744814 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a7966c67a25a133a6fe97cc36798353-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-6-1137cb7bd3\" (UID: \"6a7966c67a25a133a6fe97cc36798353\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:21.744888 kubelet[2450]: I1216 03:16:21.744842 2450 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a7966c67a25a133a6fe97cc36798353-ca-certs\") pod \"kube-controller-manager-ci-4547-0-0-6-1137cb7bd3\" (UID: \"6a7966c67a25a133a6fe97cc36798353\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:21.744888 kubelet[2450]: I1216 03:16:21.744859 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffdd102986f65bccb1603f02203a19ea-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-6-1137cb7bd3\" (UID: \"ffdd102986f65bccb1603f02203a19ea\") " pod="kube-system/kube-apiserver-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:21.744888 kubelet[2450]: I1216 03:16:21.744875 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffdd102986f65bccb1603f02203a19ea-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-6-1137cb7bd3\" (UID: \"ffdd102986f65bccb1603f02203a19ea\") " pod="kube-system/kube-apiserver-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:21.745097 kubelet[2450]: I1216 03:16:21.744895 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6a7966c67a25a133a6fe97cc36798353-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-6-1137cb7bd3\" (UID: \"6a7966c67a25a133a6fe97cc36798353\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:21.745097 kubelet[2450]: I1216 03:16:21.744926 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a7966c67a25a133a6fe97cc36798353-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-6-1137cb7bd3\" (UID: 
\"6a7966c67a25a133a6fe97cc36798353\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:21.745097 kubelet[2450]: I1216 03:16:21.744956 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a7966c67a25a133a6fe97cc36798353-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-6-1137cb7bd3\" (UID: \"6a7966c67a25a133a6fe97cc36798353\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:21.745097 kubelet[2450]: I1216 03:16:21.744978 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd737292ee6c34a65b3fc87526147c96-kubeconfig\") pod \"kube-scheduler-ci-4547-0-0-6-1137cb7bd3\" (UID: \"bd737292ee6c34a65b3fc87526147c96\") " pod="kube-system/kube-scheduler-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:21.745097 kubelet[2450]: I1216 03:16:21.744991 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffdd102986f65bccb1603f02203a19ea-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-6-1137cb7bd3\" (UID: \"ffdd102986f65bccb1603f02203a19ea\") " pod="kube-system/kube-apiserver-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:21.921710 kubelet[2450]: I1216 03:16:21.921588 2450 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:21.923884 kubelet[2450]: E1216 03:16:21.923840 2450 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.108.246.88:6443/api/v1/nodes\": dial tcp 65.108.246.88:6443: connect: connection refused" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:21.986741 containerd[1630]: time="2025-12-16T03:16:21.986658126Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-6-1137cb7bd3,Uid:ffdd102986f65bccb1603f02203a19ea,Namespace:kube-system,Attempt:0,}" Dec 16 03:16:21.995354 containerd[1630]: time="2025-12-16T03:16:21.995148117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-6-1137cb7bd3,Uid:6a7966c67a25a133a6fe97cc36798353,Namespace:kube-system,Attempt:0,}" Dec 16 03:16:21.996574 containerd[1630]: time="2025-12-16T03:16:21.996530746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-6-1137cb7bd3,Uid:bd737292ee6c34a65b3fc87526147c96,Namespace:kube-system,Attempt:0,}" Dec 16 03:16:22.099656 containerd[1630]: time="2025-12-16T03:16:22.099539332Z" level=info msg="connecting to shim d49818394ab21648d6cc79ed472cd8c97448bf63fa5f91398acbc113af039305" address="unix:///run/containerd/s/be3ffc8131eee1b2a0a83e5990898f26d31554997e3f15fdf3ccad4dffe763d0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:16:22.104587 containerd[1630]: time="2025-12-16T03:16:22.104559613Z" level=info msg="connecting to shim 6b5244202d48a644e4ac774b9507dfbe436d1d71602b39a79ddcc60c8152b563" address="unix:///run/containerd/s/5abb251d5a4f9791fd615e473166bf29cb0ff6849b0ebb7bce1fed980c31c752" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:16:22.105482 containerd[1630]: time="2025-12-16T03:16:22.105007378Z" level=info msg="connecting to shim 551c6641841b9af5e8df00dcc7b1577d563da73ad1e337d0401e70779e85bf95" address="unix:///run/containerd/s/5df3d166f563b55d98c01ef544795a5e1f9a0509f20b7e396c4b5d5b124147f3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:16:22.144347 kubelet[2450]: E1216 03:16:22.144292 2450 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.108.246.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-6-1137cb7bd3?timeout=10s\": dial tcp 65.108.246.88:6443: connect: connection refused" interval="800ms" Dec 16 03:16:22.186909 systemd[1]: Started 
cri-containerd-551c6641841b9af5e8df00dcc7b1577d563da73ad1e337d0401e70779e85bf95.scope - libcontainer container 551c6641841b9af5e8df00dcc7b1577d563da73ad1e337d0401e70779e85bf95. Dec 16 03:16:22.187770 systemd[1]: Started cri-containerd-6b5244202d48a644e4ac774b9507dfbe436d1d71602b39a79ddcc60c8152b563.scope - libcontainer container 6b5244202d48a644e4ac774b9507dfbe436d1d71602b39a79ddcc60c8152b563. Dec 16 03:16:22.188485 systemd[1]: Started cri-containerd-d49818394ab21648d6cc79ed472cd8c97448bf63fa5f91398acbc113af039305.scope - libcontainer container d49818394ab21648d6cc79ed472cd8c97448bf63fa5f91398acbc113af039305. Dec 16 03:16:22.201000 audit: BPF prog-id=83 op=LOAD Dec 16 03:16:22.201000 audit: BPF prog-id=84 op=LOAD Dec 16 03:16:22.201000 audit[2539]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2514 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535316336363431383431623961663565386466303064636337623135 Dec 16 03:16:22.201000 audit: BPF prog-id=84 op=UNLOAD Dec 16 03:16:22.201000 audit[2539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.201000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535316336363431383431623961663565386466303064636337623135 Dec 16 03:16:22.201000 audit: BPF prog-id=85 op=LOAD Dec 16 03:16:22.201000 audit[2539]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2514 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535316336363431383431623961663565386466303064636337623135 Dec 16 03:16:22.201000 audit: BPF prog-id=86 op=LOAD Dec 16 03:16:22.201000 audit[2539]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2514 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535316336363431383431623961663565386466303064636337623135 Dec 16 03:16:22.201000 audit: BPF prog-id=86 op=UNLOAD Dec 16 03:16:22.201000 audit[2539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 03:16:22.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535316336363431383431623961663565386466303064636337623135 Dec 16 03:16:22.201000 audit: BPF prog-id=85 op=UNLOAD Dec 16 03:16:22.201000 audit[2539]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535316336363431383431623961663565386466303064636337623135 Dec 16 03:16:22.201000 audit: BPF prog-id=87 op=LOAD Dec 16 03:16:22.201000 audit[2539]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2514 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535316336363431383431623961663565386466303064636337623135 Dec 16 03:16:22.207000 audit: BPF prog-id=88 op=LOAD Dec 16 03:16:22.208000 audit: BPF prog-id=89 op=LOAD Dec 16 03:16:22.208000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2512 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662353234343230326434386136343465346163373734623935303764 Dec 16 03:16:22.208000 audit: BPF prog-id=89 op=UNLOAD Dec 16 03:16:22.208000 audit[2544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662353234343230326434386136343465346163373734623935303764 Dec 16 03:16:22.209000 audit: BPF prog-id=90 op=LOAD Dec 16 03:16:22.209000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2512 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662353234343230326434386136343465346163373734623935303764 Dec 16 03:16:22.209000 audit: BPF prog-id=91 op=LOAD Dec 16 03:16:22.209000 audit: BPF prog-id=92 op=LOAD Dec 16 03:16:22.209000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2512 
pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662353234343230326434386136343465346163373734623935303764 Dec 16 03:16:22.209000 audit: BPF prog-id=91 op=UNLOAD Dec 16 03:16:22.209000 audit[2544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662353234343230326434386136343465346163373734623935303764 Dec 16 03:16:22.209000 audit: BPF prog-id=90 op=UNLOAD Dec 16 03:16:22.209000 audit[2544]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662353234343230326434386136343465346163373734623935303764 Dec 16 03:16:22.209000 audit: BPF prog-id=93 op=LOAD Dec 16 03:16:22.209000 audit[2536]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 
a2=98 a3=0 items=0 ppid=2504 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434393831383339346162323136343864366363373965643437326364 Dec 16 03:16:22.210000 audit: BPF prog-id=93 op=UNLOAD Dec 16 03:16:22.210000 audit[2536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434393831383339346162323136343864366363373965643437326364 Dec 16 03:16:22.210000 audit: BPF prog-id=94 op=LOAD Dec 16 03:16:22.210000 audit: BPF prog-id=95 op=LOAD Dec 16 03:16:22.210000 audit[2544]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2512 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.210000 audit[2536]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2504 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.210000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434393831383339346162323136343864366363373965643437326364 Dec 16 03:16:22.211000 audit: BPF prog-id=96 op=LOAD Dec 16 03:16:22.211000 audit[2536]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2504 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434393831383339346162323136343864366363373965643437326364 Dec 16 03:16:22.211000 audit: BPF prog-id=96 op=UNLOAD Dec 16 03:16:22.211000 audit[2536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434393831383339346162323136343864366363373965643437326364 Dec 16 03:16:22.211000 audit: BPF prog-id=95 op=UNLOAD Dec 16 03:16:22.211000 audit[2536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:16:22.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434393831383339346162323136343864366363373965643437326364 Dec 16 03:16:22.211000 audit: BPF prog-id=97 op=LOAD Dec 16 03:16:22.211000 audit[2536]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2504 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434393831383339346162323136343864366363373965643437326364 Dec 16 03:16:22.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662353234343230326434386136343465346163373734623935303764 Dec 16 03:16:22.253152 containerd[1630]: time="2025-12-16T03:16:22.252919906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-6-1137cb7bd3,Uid:bd737292ee6c34a65b3fc87526147c96,Namespace:kube-system,Attempt:0,} returns sandbox id \"551c6641841b9af5e8df00dcc7b1577d563da73ad1e337d0401e70779e85bf95\"" Dec 16 03:16:22.261503 containerd[1630]: time="2025-12-16T03:16:22.261447349Z" level=info msg="CreateContainer within sandbox \"551c6641841b9af5e8df00dcc7b1577d563da73ad1e337d0401e70779e85bf95\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 03:16:22.276695 containerd[1630]: time="2025-12-16T03:16:22.276657069Z" level=info msg="Container 
c68807966e424b84d40d947c29a0a990fa356c2fb52c7c8319f7dbba42a4e44d: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:16:22.279038 containerd[1630]: time="2025-12-16T03:16:22.278959104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-6-1137cb7bd3,Uid:6a7966c67a25a133a6fe97cc36798353,Namespace:kube-system,Attempt:0,} returns sandbox id \"d49818394ab21648d6cc79ed472cd8c97448bf63fa5f91398acbc113af039305\"" Dec 16 03:16:22.280436 containerd[1630]: time="2025-12-16T03:16:22.280194950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-6-1137cb7bd3,Uid:ffdd102986f65bccb1603f02203a19ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b5244202d48a644e4ac774b9507dfbe436d1d71602b39a79ddcc60c8152b563\"" Dec 16 03:16:22.281938 containerd[1630]: time="2025-12-16T03:16:22.281903378Z" level=info msg="CreateContainer within sandbox \"d49818394ab21648d6cc79ed472cd8c97448bf63fa5f91398acbc113af039305\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 03:16:22.283487 containerd[1630]: time="2025-12-16T03:16:22.283419060Z" level=info msg="CreateContainer within sandbox \"6b5244202d48a644e4ac774b9507dfbe436d1d71602b39a79ddcc60c8152b563\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 03:16:22.285714 containerd[1630]: time="2025-12-16T03:16:22.285696657Z" level=info msg="CreateContainer within sandbox \"551c6641841b9af5e8df00dcc7b1577d563da73ad1e337d0401e70779e85bf95\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c68807966e424b84d40d947c29a0a990fa356c2fb52c7c8319f7dbba42a4e44d\"" Dec 16 03:16:22.286791 containerd[1630]: time="2025-12-16T03:16:22.286214508Z" level=info msg="StartContainer for \"c68807966e424b84d40d947c29a0a990fa356c2fb52c7c8319f7dbba42a4e44d\"" Dec 16 03:16:22.287073 containerd[1630]: time="2025-12-16T03:16:22.287056559Z" level=info msg="connecting to shim 
c68807966e424b84d40d947c29a0a990fa356c2fb52c7c8319f7dbba42a4e44d" address="unix:///run/containerd/s/5df3d166f563b55d98c01ef544795a5e1f9a0509f20b7e396c4b5d5b124147f3" protocol=ttrpc version=3 Dec 16 03:16:22.292629 containerd[1630]: time="2025-12-16T03:16:22.292597635Z" level=info msg="Container ea559b202d2eb23c511fd3586c2c37061797c852658d1964b1108d3139b6457b: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:16:22.302999 containerd[1630]: time="2025-12-16T03:16:22.302963644Z" level=info msg="CreateContainer within sandbox \"d49818394ab21648d6cc79ed472cd8c97448bf63fa5f91398acbc113af039305\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ea559b202d2eb23c511fd3586c2c37061797c852658d1964b1108d3139b6457b\"" Dec 16 03:16:22.304642 containerd[1630]: time="2025-12-16T03:16:22.303660908Z" level=info msg="StartContainer for \"ea559b202d2eb23c511fd3586c2c37061797c852658d1964b1108d3139b6457b\"" Dec 16 03:16:22.304642 containerd[1630]: time="2025-12-16T03:16:22.304149268Z" level=info msg="Container a28e870d4079ccd6b0a5b6fc7b82f00100cf2ffaeab005ad131c4c212cbb1a62: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:16:22.304642 containerd[1630]: time="2025-12-16T03:16:22.304368914Z" level=info msg="connecting to shim ea559b202d2eb23c511fd3586c2c37061797c852658d1964b1108d3139b6457b" address="unix:///run/containerd/s/be3ffc8131eee1b2a0a83e5990898f26d31554997e3f15fdf3ccad4dffe763d0" protocol=ttrpc version=3 Dec 16 03:16:22.306915 systemd[1]: Started cri-containerd-c68807966e424b84d40d947c29a0a990fa356c2fb52c7c8319f7dbba42a4e44d.scope - libcontainer container c68807966e424b84d40d947c29a0a990fa356c2fb52c7c8319f7dbba42a4e44d. 
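Entries in this transcript share the journald shape `<month> <day> <time> <unit>[pid]: <message>`, which makes them easy to split mechanically when correlating kubelet, systemd, and containerd events. A minimal parser sketch, assuming only the line shape seen here (real journald exports carry more fields):

```python
import re

# Matches e.g.: Dec 16 03:16:21.718861 kubelet[2450]: I1216 ... "Attempting to register node"
LINE_RE = re.compile(r"^(\w{3} \d{1,2} [\d:.]+) (\w+)\[(\d+)\]: (.*)$")

def parse_entry(line: str):
    """Split one log line into timestamp, emitting unit, PID, and message."""
    m = LINE_RE.match(line)
    if m is None:
        return None  # audit records and continuation fragments won't match
    ts, unit, pid, msg = m.groups()
    return {"ts": ts, "unit": unit, "pid": int(pid), "msg": msg}
```

Filtering parsed entries by `unit` separates, for instance, the three `systemd[1]: Started cri-containerd-...` events from the interleaved runc audit noise.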
Dec 16 03:16:22.315645 containerd[1630]: time="2025-12-16T03:16:22.315502465Z" level=info msg="CreateContainer within sandbox \"6b5244202d48a644e4ac774b9507dfbe436d1d71602b39a79ddcc60c8152b563\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a28e870d4079ccd6b0a5b6fc7b82f00100cf2ffaeab005ad131c4c212cbb1a62\"" Dec 16 03:16:22.317511 containerd[1630]: time="2025-12-16T03:16:22.317406223Z" level=info msg="StartContainer for \"a28e870d4079ccd6b0a5b6fc7b82f00100cf2ffaeab005ad131c4c212cbb1a62\"" Dec 16 03:16:22.319345 containerd[1630]: time="2025-12-16T03:16:22.319327376Z" level=info msg="connecting to shim a28e870d4079ccd6b0a5b6fc7b82f00100cf2ffaeab005ad131c4c212cbb1a62" address="unix:///run/containerd/s/5abb251d5a4f9791fd615e473166bf29cb0ff6849b0ebb7bce1fed980c31c752" protocol=ttrpc version=3 Dec 16 03:16:22.322906 systemd[1]: Started cri-containerd-ea559b202d2eb23c511fd3586c2c37061797c852658d1964b1108d3139b6457b.scope - libcontainer container ea559b202d2eb23c511fd3586c2c37061797c852658d1964b1108d3139b6457b. 
Dec 16 03:16:22.326276 kubelet[2450]: I1216 03:16:22.326246 2450 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:22.325000 audit: BPF prog-id=98 op=LOAD Dec 16 03:16:22.325000 audit: BPF prog-id=99 op=LOAD Dec 16 03:16:22.325000 audit[2621]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2514 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383830373936366534323462383464343064393437633239613061 Dec 16 03:16:22.326000 audit: BPF prog-id=99 op=UNLOAD Dec 16 03:16:22.326000 audit[2621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383830373936366534323462383464343064393437633239613061 Dec 16 03:16:22.326000 audit: BPF prog-id=100 op=LOAD Dec 16 03:16:22.326000 audit[2621]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2514 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.326000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383830373936366534323462383464343064393437633239613061 Dec 16 03:16:22.326000 audit: BPF prog-id=101 op=LOAD Dec 16 03:16:22.326000 audit[2621]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2514 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383830373936366534323462383464343064393437633239613061 Dec 16 03:16:22.326000 audit: BPF prog-id=101 op=UNLOAD Dec 16 03:16:22.326000 audit[2621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383830373936366534323462383464343064393437633239613061 Dec 16 03:16:22.326000 audit: BPF prog-id=100 op=UNLOAD Dec 16 03:16:22.326000 audit[2621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:16:22.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383830373936366534323462383464343064393437633239613061 Dec 16 03:16:22.326000 audit: BPF prog-id=102 op=LOAD Dec 16 03:16:22.326000 audit[2621]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2514 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336383830373936366534323462383464343064393437633239613061 Dec 16 03:16:22.327692 kubelet[2450]: E1216 03:16:22.327662 2450 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.108.246.88:6443/api/v1/nodes\": dial tcp 65.108.246.88:6443: connect: connection refused" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:22.341016 systemd[1]: Started cri-containerd-a28e870d4079ccd6b0a5b6fc7b82f00100cf2ffaeab005ad131c4c212cbb1a62.scope - libcontainer container a28e870d4079ccd6b0a5b6fc7b82f00100cf2ffaeab005ad131c4c212cbb1a62. 
Dec 16 03:16:22.346000 audit: BPF prog-id=103 op=LOAD Dec 16 03:16:22.346000 audit: BPF prog-id=104 op=LOAD Dec 16 03:16:22.346000 audit[2635]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2504 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.346000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353539623230326432656232336335313166643335383663326333 Dec 16 03:16:22.347000 audit: BPF prog-id=104 op=UNLOAD Dec 16 03:16:22.347000 audit[2635]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353539623230326432656232336335313166643335383663326333 Dec 16 03:16:22.347000 audit: BPF prog-id=105 op=LOAD Dec 16 03:16:22.347000 audit[2635]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2504 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.347000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353539623230326432656232336335313166643335383663326333 Dec 16 03:16:22.347000 audit: BPF prog-id=106 op=LOAD Dec 16 03:16:22.347000 audit[2635]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2504 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353539623230326432656232336335313166643335383663326333 Dec 16 03:16:22.347000 audit: BPF prog-id=106 op=UNLOAD Dec 16 03:16:22.347000 audit[2635]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353539623230326432656232336335313166643335383663326333 Dec 16 03:16:22.347000 audit: BPF prog-id=105 op=UNLOAD Dec 16 03:16:22.347000 audit[2635]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:16:22.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353539623230326432656232336335313166643335383663326333 Dec 16 03:16:22.347000 audit: BPF prog-id=107 op=LOAD Dec 16 03:16:22.347000 audit[2635]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2504 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353539623230326432656232336335313166643335383663326333 Dec 16 03:16:22.359000 audit: BPF prog-id=108 op=LOAD Dec 16 03:16:22.359000 audit: BPF prog-id=109 op=LOAD Dec 16 03:16:22.359000 audit[2651]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2512 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132386538373064343037396363643662306135623666633762383266 Dec 16 03:16:22.359000 audit: BPF prog-id=109 op=UNLOAD Dec 16 03:16:22.359000 audit[2651]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132386538373064343037396363643662306135623666633762383266 Dec 16 03:16:22.361000 audit: BPF prog-id=110 op=LOAD Dec 16 03:16:22.361000 audit[2651]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2512 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132386538373064343037396363643662306135623666633762383266 Dec 16 03:16:22.361000 audit: BPF prog-id=111 op=LOAD Dec 16 03:16:22.361000 audit[2651]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2512 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132386538373064343037396363643662306135623666633762383266 Dec 16 03:16:22.361000 audit: BPF prog-id=111 op=UNLOAD Dec 16 03:16:22.361000 audit[2651]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132386538373064343037396363643662306135623666633762383266 Dec 16 03:16:22.361000 audit: BPF prog-id=110 op=UNLOAD Dec 16 03:16:22.361000 audit[2651]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132386538373064343037396363643662306135623666633762383266 Dec 16 03:16:22.361000 audit: BPF prog-id=112 op=LOAD Dec 16 03:16:22.361000 audit[2651]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2512 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:22.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132386538373064343037396363643662306135623666633762383266 Dec 16 03:16:22.397129 containerd[1630]: time="2025-12-16T03:16:22.397046622Z" level=info msg="StartContainer for \"ea559b202d2eb23c511fd3586c2c37061797c852658d1964b1108d3139b6457b\" returns 
successfully" Dec 16 03:16:22.403527 containerd[1630]: time="2025-12-16T03:16:22.403273499Z" level=info msg="StartContainer for \"c68807966e424b84d40d947c29a0a990fa356c2fb52c7c8319f7dbba42a4e44d\" returns successfully" Dec 16 03:16:22.411039 containerd[1630]: time="2025-12-16T03:16:22.410921742Z" level=info msg="StartContainer for \"a28e870d4079ccd6b0a5b6fc7b82f00100cf2ffaeab005ad131c4c212cbb1a62\" returns successfully" Dec 16 03:16:22.583943 kubelet[2450]: E1216 03:16:22.583497 2450 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:22.589200 kubelet[2450]: E1216 03:16:22.589184 2450 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:22.590775 kubelet[2450]: E1216 03:16:22.589996 2450 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:22.692032 kubelet[2450]: W1216 03:16:22.691981 2450 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://65.108.246.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 65.108.246.88:6443: connect: connection refused Dec 16 03:16:22.692832 kubelet[2450]: E1216 03:16:22.692811 2450 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://65.108.246.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.108.246.88:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:16:22.694077 kubelet[2450]: W1216 03:16:22.694021 2450 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://65.108.246.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-0-0-6-1137cb7bd3&limit=500&resourceVersion=0": dial tcp 65.108.246.88:6443: connect: connection refused Dec 16 03:16:22.694077 kubelet[2450]: E1216 03:16:22.694058 2450 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://65.108.246.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-0-0-6-1137cb7bd3&limit=500&resourceVersion=0\": dial tcp 65.108.246.88:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:16:22.724640 kubelet[2450]: W1216 03:16:22.724571 2450 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://65.108.246.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 65.108.246.88:6443: connect: connection refused Dec 16 03:16:22.724640 kubelet[2450]: E1216 03:16:22.724620 2450 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://65.108.246.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.108.246.88:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:16:23.129219 kubelet[2450]: I1216 03:16:23.129191 2450 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:23.591719 kubelet[2450]: E1216 03:16:23.591686 2450 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:23.592841 kubelet[2450]: E1216 03:16:23.592822 2450 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" 
node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:24.143971 kubelet[2450]: E1216 03:16:24.143936 2450 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547-0-0-6-1137cb7bd3\" not found" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:24.208002 kubelet[2450]: I1216 03:16:24.207941 2450 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:24.208002 kubelet[2450]: E1216 03:16:24.207970 2450 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4547-0-0-6-1137cb7bd3\": node \"ci-4547-0-0-6-1137cb7bd3\" not found" Dec 16 03:16:24.223195 kubelet[2450]: E1216 03:16:24.223140 2450 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" Dec 16 03:16:24.323868 kubelet[2450]: E1216 03:16:24.323825 2450 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" Dec 16 03:16:24.425032 kubelet[2450]: E1216 03:16:24.424912 2450 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" Dec 16 03:16:24.525678 kubelet[2450]: E1216 03:16:24.525613 2450 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" Dec 16 03:16:24.626203 kubelet[2450]: E1216 03:16:24.626140 2450 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" Dec 16 03:16:24.726803 kubelet[2450]: E1216 03:16:24.726737 2450 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" Dec 16 03:16:24.827464 kubelet[2450]: E1216 03:16:24.827427 2450 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" Dec 16 03:16:24.927682 
kubelet[2450]: E1216 03:16:24.927631 2450 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" Dec 16 03:16:25.028167 kubelet[2450]: E1216 03:16:25.028040 2450 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" Dec 16 03:16:25.128986 kubelet[2450]: E1216 03:16:25.128879 2450 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" Dec 16 03:16:25.141545 kubelet[2450]: I1216 03:16:25.141466 2450 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:25.153063 kubelet[2450]: I1216 03:16:25.152324 2450 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:25.156069 kubelet[2450]: I1216 03:16:25.156037 2450 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:25.518992 kubelet[2450]: I1216 03:16:25.518928 2450 apiserver.go:52] "Watching apiserver" Dec 16 03:16:25.541927 kubelet[2450]: I1216 03:16:25.541885 2450 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 03:16:26.160929 systemd[1]: Reload requested from client PID 2718 ('systemctl') (unit session-8.scope)... Dec 16 03:16:26.160952 systemd[1]: Reloading... Dec 16 03:16:26.256832 zram_generator::config[2764]: No configuration found. Dec 16 03:16:26.437875 systemd[1]: Reloading finished in 276 ms. Dec 16 03:16:26.473692 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:16:26.485678 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 03:16:26.486062 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 03:16:26.492092 kernel: kauditd_printk_skb: 204 callbacks suppressed Dec 16 03:16:26.492130 kernel: audit: type=1131 audit(1765854986.484:401): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:26.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:26.486131 systemd[1]: kubelet.service: Consumed 609ms CPU time, 129.3M memory peak. Dec 16 03:16:26.488955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:16:26.494693 kernel: audit: type=1334 audit(1765854986.489:402): prog-id=113 op=LOAD Dec 16 03:16:26.489000 audit: BPF prog-id=113 op=LOAD Dec 16 03:16:26.489000 audit: BPF prog-id=63 op=UNLOAD Dec 16 03:16:26.490000 audit: BPF prog-id=114 op=LOAD Dec 16 03:16:26.490000 audit: BPF prog-id=76 op=UNLOAD Dec 16 03:16:26.490000 audit: BPF prog-id=115 op=LOAD Dec 16 03:16:26.490000 audit: BPF prog-id=116 op=LOAD Dec 16 03:16:26.490000 audit: BPF prog-id=77 op=UNLOAD Dec 16 03:16:26.491000 audit: BPF prog-id=78 op=UNLOAD Dec 16 03:16:26.493000 audit: BPF prog-id=117 op=LOAD Dec 16 03:16:26.493000 audit: BPF prog-id=64 op=UNLOAD Dec 16 03:16:26.493000 audit: BPF prog-id=118 op=LOAD Dec 16 03:16:26.493000 audit: BPF prog-id=119 op=LOAD Dec 16 03:16:26.493000 audit: BPF prog-id=65 op=UNLOAD Dec 16 03:16:26.493000 audit: BPF prog-id=66 op=UNLOAD Dec 16 03:16:26.494000 audit: BPF prog-id=120 op=LOAD Dec 16 03:16:26.494000 audit: BPF prog-id=67 op=UNLOAD Dec 16 03:16:26.494000 audit: BPF prog-id=121 op=LOAD Dec 16 03:16:26.494000 audit: BPF prog-id=122 op=LOAD Dec 16 03:16:26.494000 audit: BPF prog-id=68 op=UNLOAD Dec 16 03:16:26.494000 audit: BPF prog-id=69 op=UNLOAD Dec 16 03:16:26.495000 audit: BPF prog-id=123 
op=LOAD Dec 16 03:16:26.495000 audit: BPF prog-id=82 op=UNLOAD Dec 16 03:16:26.497782 kernel: audit: type=1334 audit(1765854986.489:403): prog-id=63 op=UNLOAD Dec 16 03:16:26.497811 kernel: audit: type=1334 audit(1765854986.490:404): prog-id=114 op=LOAD Dec 16 03:16:26.497824 kernel: audit: type=1334 audit(1765854986.490:405): prog-id=76 op=UNLOAD Dec 16 03:16:26.497836 kernel: audit: type=1334 audit(1765854986.490:406): prog-id=115 op=LOAD Dec 16 03:16:26.497848 kernel: audit: type=1334 audit(1765854986.490:407): prog-id=116 op=LOAD Dec 16 03:16:26.497859 kernel: audit: type=1334 audit(1765854986.490:408): prog-id=77 op=UNLOAD Dec 16 03:16:26.497872 kernel: audit: type=1334 audit(1765854986.491:409): prog-id=78 op=UNLOAD Dec 16 03:16:26.497885 kernel: audit: type=1334 audit(1765854986.493:410): prog-id=117 op=LOAD Dec 16 03:16:26.498000 audit: BPF prog-id=124 op=LOAD Dec 16 03:16:26.498000 audit: BPF prog-id=75 op=UNLOAD Dec 16 03:16:26.498000 audit: BPF prog-id=125 op=LOAD Dec 16 03:16:26.498000 audit: BPF prog-id=126 op=LOAD Dec 16 03:16:26.498000 audit: BPF prog-id=70 op=UNLOAD Dec 16 03:16:26.498000 audit: BPF prog-id=71 op=UNLOAD Dec 16 03:16:26.499000 audit: BPF prog-id=127 op=LOAD Dec 16 03:16:26.499000 audit: BPF prog-id=72 op=UNLOAD Dec 16 03:16:26.499000 audit: BPF prog-id=128 op=LOAD Dec 16 03:16:26.499000 audit: BPF prog-id=129 op=LOAD Dec 16 03:16:26.499000 audit: BPF prog-id=73 op=UNLOAD Dec 16 03:16:26.499000 audit: BPF prog-id=74 op=UNLOAD Dec 16 03:16:26.500000 audit: BPF prog-id=130 op=LOAD Dec 16 03:16:26.500000 audit: BPF prog-id=79 op=UNLOAD Dec 16 03:16:26.500000 audit: BPF prog-id=131 op=LOAD Dec 16 03:16:26.501000 audit: BPF prog-id=132 op=LOAD Dec 16 03:16:26.501000 audit: BPF prog-id=80 op=UNLOAD Dec 16 03:16:26.501000 audit: BPF prog-id=81 op=UNLOAD Dec 16 03:16:26.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Dec 16 03:16:26.623893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:16:26.626909 (kubelet)[2816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:16:26.678251 kubelet[2816]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:16:26.680159 kubelet[2816]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 03:16:26.680216 kubelet[2816]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:16:26.680467 kubelet[2816]: I1216 03:16:26.680423 2816 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:16:26.686128 kubelet[2816]: I1216 03:16:26.686093 2816 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 03:16:26.686128 kubelet[2816]: I1216 03:16:26.686116 2816 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:16:26.686351 kubelet[2816]: I1216 03:16:26.686327 2816 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 03:16:26.687512 kubelet[2816]: I1216 03:16:26.687494 2816 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 16 03:16:26.696363 kubelet[2816]: I1216 03:16:26.695708 2816 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:16:26.700626 kubelet[2816]: I1216 03:16:26.700615 2816 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:16:26.703940 kubelet[2816]: I1216 03:16:26.703927 2816 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 03:16:26.704166 kubelet[2816]: I1216 03:16:26.704146 2816 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:16:26.704434 kubelet[2816]: I1216 03:16:26.704216 2816 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-6-1137cb7bd3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"
none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:16:26.704539 kubelet[2816]: I1216 03:16:26.704529 2816 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 03:16:26.704578 kubelet[2816]: I1216 03:16:26.704573 2816 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 03:16:26.704651 kubelet[2816]: I1216 03:16:26.704644 2816 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:16:26.704878 kubelet[2816]: I1216 03:16:26.704855 2816 kubelet.go:446] "Attempting to sync node with API server" Dec 16 03:16:26.704969 kubelet[2816]: I1216 03:16:26.704959 2816 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:16:26.705027 kubelet[2816]: I1216 03:16:26.705021 2816 kubelet.go:352] "Adding apiserver pod source" Dec 16 03:16:26.706198 kubelet[2816]: I1216 03:16:26.706171 2816 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:16:26.708396 kubelet[2816]: I1216 03:16:26.708369 2816 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:16:26.708887 kubelet[2816]: I1216 03:16:26.708866 2816 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 03:16:26.709441 kubelet[2816]: I1216 03:16:26.709418 2816 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:16:26.709495 kubelet[2816]: I1216 03:16:26.709469 2816 server.go:1287] "Started kubelet" Dec 16 03:16:26.712244 kubelet[2816]: I1216 03:16:26.711803 2816 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:16:26.718072 kubelet[2816]: I1216 
03:16:26.717528 2816 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:16:26.719771 kubelet[2816]: I1216 03:16:26.718579 2816 server.go:479] "Adding debug handlers to kubelet server" Dec 16 03:16:26.719771 kubelet[2816]: I1216 03:16:26.719415 2816 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:16:26.719771 kubelet[2816]: I1216 03:16:26.719602 2816 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:16:26.719933 kubelet[2816]: I1216 03:16:26.719812 2816 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:16:26.721433 kubelet[2816]: I1216 03:16:26.721415 2816 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:16:26.721713 kubelet[2816]: E1216 03:16:26.721687 2816 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-6-1137cb7bd3\" not found" Dec 16 03:16:26.725570 kubelet[2816]: I1216 03:16:26.725502 2816 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:16:26.725638 kubelet[2816]: I1216 03:16:26.725623 2816 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:16:26.725930 kubelet[2816]: I1216 03:16:26.725909 2816 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:16:26.728719 kubelet[2816]: I1216 03:16:26.728687 2816 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 03:16:26.733768 kubelet[2816]: I1216 03:16:26.733728 2816 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 16 03:16:26.733768 kubelet[2816]: I1216 03:16:26.733769 2816 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 03:16:26.733840 kubelet[2816]: I1216 03:16:26.733783 2816 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 03:16:26.733840 kubelet[2816]: I1216 03:16:26.733789 2816 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 03:16:26.733840 kubelet[2816]: E1216 03:16:26.733822 2816 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:16:26.739547 kubelet[2816]: I1216 03:16:26.739523 2816 factory.go:221] Registration of the containerd container factory successfully Dec 16 03:16:26.739547 kubelet[2816]: I1216 03:16:26.739541 2816 factory.go:221] Registration of the systemd container factory successfully Dec 16 03:16:26.787642 kubelet[2816]: I1216 03:16:26.787621 2816 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:16:26.787831 kubelet[2816]: I1216 03:16:26.787819 2816 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:16:26.787894 kubelet[2816]: I1216 03:16:26.787887 2816 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:16:26.788067 kubelet[2816]: I1216 03:16:26.788055 2816 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 03:16:26.788124 kubelet[2816]: I1216 03:16:26.788106 2816 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 03:16:26.788165 kubelet[2816]: I1216 03:16:26.788159 2816 policy_none.go:49] "None policy: Start" Dec 16 03:16:26.788204 kubelet[2816]: I1216 03:16:26.788198 2816 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:16:26.788239 kubelet[2816]: I1216 03:16:26.788234 2816 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:16:26.788403 kubelet[2816]: I1216 03:16:26.788394 2816 
state_mem.go:75] "Updated machine memory state" Dec 16 03:16:26.792026 kubelet[2816]: I1216 03:16:26.791991 2816 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 03:16:26.792168 kubelet[2816]: I1216 03:16:26.792145 2816 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:16:26.792198 kubelet[2816]: I1216 03:16:26.792164 2816 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:16:26.792362 kubelet[2816]: I1216 03:16:26.792342 2816 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:16:26.794319 kubelet[2816]: E1216 03:16:26.794253 2816 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 03:16:26.834592 kubelet[2816]: I1216 03:16:26.834550 2816 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:26.835773 kubelet[2816]: I1216 03:16:26.835685 2816 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:26.835773 kubelet[2816]: I1216 03:16:26.835745 2816 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:26.842780 kubelet[2816]: E1216 03:16:26.842727 2816 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-6-1137cb7bd3\" already exists" pod="kube-system/kube-apiserver-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:26.843636 kubelet[2816]: E1216 03:16:26.843437 2816 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-6-1137cb7bd3\" already exists" pod="kube-system/kube-scheduler-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:26.843767 kubelet[2816]: E1216 03:16:26.843725 2816 kubelet.go:3196] 
"Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547-0-0-6-1137cb7bd3\" already exists" pod="kube-system/kube-controller-manager-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:26.896643 kubelet[2816]: I1216 03:16:26.896417 2816 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:26.910420 kubelet[2816]: I1216 03:16:26.910386 2816 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:26.910541 kubelet[2816]: I1216 03:16:26.910493 2816 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:27.026129 kubelet[2816]: I1216 03:16:27.026083 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffdd102986f65bccb1603f02203a19ea-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-6-1137cb7bd3\" (UID: \"ffdd102986f65bccb1603f02203a19ea\") " pod="kube-system/kube-apiserver-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:27.026129 kubelet[2816]: I1216 03:16:27.026131 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffdd102986f65bccb1603f02203a19ea-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-6-1137cb7bd3\" (UID: \"ffdd102986f65bccb1603f02203a19ea\") " pod="kube-system/kube-apiserver-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:27.026306 kubelet[2816]: I1216 03:16:27.026156 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a7966c67a25a133a6fe97cc36798353-ca-certs\") pod \"kube-controller-manager-ci-4547-0-0-6-1137cb7bd3\" (UID: \"6a7966c67a25a133a6fe97cc36798353\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:27.026306 kubelet[2816]: I1216 03:16:27.026186 2816 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6a7966c67a25a133a6fe97cc36798353-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-6-1137cb7bd3\" (UID: \"6a7966c67a25a133a6fe97cc36798353\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:27.026306 kubelet[2816]: I1216 03:16:27.026202 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a7966c67a25a133a6fe97cc36798353-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-6-1137cb7bd3\" (UID: \"6a7966c67a25a133a6fe97cc36798353\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:27.026306 kubelet[2816]: I1216 03:16:27.026216 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a7966c67a25a133a6fe97cc36798353-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-6-1137cb7bd3\" (UID: \"6a7966c67a25a133a6fe97cc36798353\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:27.026306 kubelet[2816]: I1216 03:16:27.026232 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a7966c67a25a133a6fe97cc36798353-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-6-1137cb7bd3\" (UID: \"6a7966c67a25a133a6fe97cc36798353\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:27.026413 kubelet[2816]: I1216 03:16:27.026248 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd737292ee6c34a65b3fc87526147c96-kubeconfig\") pod 
\"kube-scheduler-ci-4547-0-0-6-1137cb7bd3\" (UID: \"bd737292ee6c34a65b3fc87526147c96\") " pod="kube-system/kube-scheduler-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:27.026413 kubelet[2816]: I1216 03:16:27.026281 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffdd102986f65bccb1603f02203a19ea-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-6-1137cb7bd3\" (UID: \"ffdd102986f65bccb1603f02203a19ea\") " pod="kube-system/kube-apiserver-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:27.707820 kubelet[2816]: I1216 03:16:27.707723 2816 apiserver.go:52] "Watching apiserver" Dec 16 03:16:27.726035 kubelet[2816]: I1216 03:16:27.725960 2816 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 03:16:27.765106 kubelet[2816]: I1216 03:16:27.764660 2816 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547-0-0-6-1137cb7bd3" podStartSLOduration=2.7646431 podStartE2EDuration="2.7646431s" podCreationTimestamp="2025-12-16 03:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:16:27.764387737 +0000 UTC m=+1.133278464" watchObservedRunningTime="2025-12-16 03:16:27.7646431 +0000 UTC m=+1.133533818" Dec 16 03:16:27.774940 kubelet[2816]: I1216 03:16:27.774898 2816 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:27.783315 kubelet[2816]: E1216 03:16:27.783223 2816 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-6-1137cb7bd3\" already exists" pod="kube-system/kube-apiserver-ci-4547-0-0-6-1137cb7bd3" Dec 16 03:16:27.790085 kubelet[2816]: I1216 03:16:27.789803 2816 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4547-0-0-6-1137cb7bd3" podStartSLOduration=2.789780243 podStartE2EDuration="2.789780243s" podCreationTimestamp="2025-12-16 03:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:16:27.779146839 +0000 UTC m=+1.148037557" watchObservedRunningTime="2025-12-16 03:16:27.789780243 +0000 UTC m=+1.158670981" Dec 16 03:16:27.790085 kubelet[2816]: I1216 03:16:27.789983 2816 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547-0-0-6-1137cb7bd3" podStartSLOduration=2.789974882 podStartE2EDuration="2.789974882s" podCreationTimestamp="2025-12-16 03:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:16:27.789764461 +0000 UTC m=+1.158655179" watchObservedRunningTime="2025-12-16 03:16:27.789974882 +0000 UTC m=+1.158865620" Dec 16 03:16:31.219905 update_engine[1605]: I20251216 03:16:31.219812 1605 update_attempter.cc:509] Updating boot flags... Dec 16 03:16:32.094648 kubelet[2816]: I1216 03:16:32.094500 2816 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 03:16:32.095015 containerd[1630]: time="2025-12-16T03:16:32.094882931Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 03:16:32.095222 kubelet[2816]: I1216 03:16:32.095166 2816 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 03:16:32.743033 systemd[1]: Created slice kubepods-besteffort-pod5cbf4626_66f2_463f_a066_0da72079da21.slice - libcontainer container kubepods-besteffort-pod5cbf4626_66f2_463f_a066_0da72079da21.slice. 
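The `container_manager_linux.go` entry earlier in this log dumps the kubelet's NodeConfig, including its hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5% — the kubelet's built-in defaults when `--eviction-hard` is not overridden). A minimal sketch of pulling those thresholds back out of such a log line, assuming the JSON fragment is extracted verbatim:

```python
import json

# HardEvictionThresholds fragment as logged by container_manager_linux.go above.
node_config_fragment = json.loads("""
{"HardEvictionThresholds":[
 {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}]}
""")

def summarize_thresholds(cfg):
    """Map each eviction signal to its quantity or percentage cutoff."""
    out = {}
    for t in cfg["HardEvictionThresholds"]:
        v = t["Value"]
        # A threshold carries either an absolute Quantity or a Percentage.
        out[t["Signal"]] = v["Quantity"] if v["Quantity"] is not None else f'{v["Percentage"]:.0%}'
    return out
```

This only restates what the log already records; the helper name `summarize_thresholds` is illustrative, not part of the kubelet.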
Dec 16 03:16:32.765165 kubelet[2816]: I1216 03:16:32.765107 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5cbf4626-66f2-463f-a066-0da72079da21-kube-proxy\") pod \"kube-proxy-b4mgh\" (UID: \"5cbf4626-66f2-463f-a066-0da72079da21\") " pod="kube-system/kube-proxy-b4mgh" Dec 16 03:16:32.765165 kubelet[2816]: I1216 03:16:32.765165 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cbf4626-66f2-463f-a066-0da72079da21-xtables-lock\") pod \"kube-proxy-b4mgh\" (UID: \"5cbf4626-66f2-463f-a066-0da72079da21\") " pod="kube-system/kube-proxy-b4mgh" Dec 16 03:16:32.765392 kubelet[2816]: I1216 03:16:32.765184 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cbf4626-66f2-463f-a066-0da72079da21-lib-modules\") pod \"kube-proxy-b4mgh\" (UID: \"5cbf4626-66f2-463f-a066-0da72079da21\") " pod="kube-system/kube-proxy-b4mgh" Dec 16 03:16:32.765392 kubelet[2816]: I1216 03:16:32.765199 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w7f7\" (UniqueName: \"kubernetes.io/projected/5cbf4626-66f2-463f-a066-0da72079da21-kube-api-access-2w7f7\") pod \"kube-proxy-b4mgh\" (UID: \"5cbf4626-66f2-463f-a066-0da72079da21\") " pod="kube-system/kube-proxy-b4mgh" Dec 16 03:16:32.874637 kubelet[2816]: E1216 03:16:32.874560 2816 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 03:16:32.874637 kubelet[2816]: E1216 03:16:32.874601 2816 projected.go:194] Error preparing data for projected volume kube-api-access-2w7f7 for pod kube-system/kube-proxy-b4mgh: configmap "kube-root-ca.crt" not found Dec 16 03:16:32.874934 kubelet[2816]: E1216 03:16:32.874669 2816 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5cbf4626-66f2-463f-a066-0da72079da21-kube-api-access-2w7f7 podName:5cbf4626-66f2-463f-a066-0da72079da21 nodeName:}" failed. No retries permitted until 2025-12-16 03:16:33.374650307 +0000 UTC m=+6.743541025 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2w7f7" (UniqueName: "kubernetes.io/projected/5cbf4626-66f2-463f-a066-0da72079da21-kube-api-access-2w7f7") pod "kube-proxy-b4mgh" (UID: "5cbf4626-66f2-463f-a066-0da72079da21") : configmap "kube-root-ca.crt" not found Dec 16 03:16:33.240853 systemd[1]: Created slice kubepods-besteffort-pod3c7f6c52_617f_4f79_85f5_ad7eada65a28.slice - libcontainer container kubepods-besteffort-pod3c7f6c52_617f_4f79_85f5_ad7eada65a28.slice. Dec 16 03:16:33.245514 kubelet[2816]: I1216 03:16:33.245449 2816 status_manager.go:890] "Failed to get status for pod" podUID="3c7f6c52-617f-4f79-85f5-ad7eada65a28" pod="tigera-operator/tigera-operator-7dcd859c48-87cs8" err="pods \"tigera-operator-7dcd859c48-87cs8\" is forbidden: User \"system:node:ci-4547-0-0-6-1137cb7bd3\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4547-0-0-6-1137cb7bd3' and this object" Dec 16 03:16:33.245931 kubelet[2816]: W1216 03:16:33.245915 2816 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4547-0-0-6-1137cb7bd3" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4547-0-0-6-1137cb7bd3' and this object Dec 16 03:16:33.246039 kubelet[2816]: E1216 03:16:33.246012 2816 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User 
\"system:node:ci-4547-0-0-6-1137cb7bd3\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4547-0-0-6-1137cb7bd3' and this object" logger="UnhandledError" Dec 16 03:16:33.269091 kubelet[2816]: I1216 03:16:33.268989 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnklp\" (UniqueName: \"kubernetes.io/projected/3c7f6c52-617f-4f79-85f5-ad7eada65a28-kube-api-access-wnklp\") pod \"tigera-operator-7dcd859c48-87cs8\" (UID: \"3c7f6c52-617f-4f79-85f5-ad7eada65a28\") " pod="tigera-operator/tigera-operator-7dcd859c48-87cs8" Dec 16 03:16:33.269091 kubelet[2816]: I1216 03:16:33.269044 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c7f6c52-617f-4f79-85f5-ad7eada65a28-var-lib-calico\") pod \"tigera-operator-7dcd859c48-87cs8\" (UID: \"3c7f6c52-617f-4f79-85f5-ad7eada65a28\") " pod="tigera-operator/tigera-operator-7dcd859c48-87cs8" Dec 16 03:16:33.654142 containerd[1630]: time="2025-12-16T03:16:33.653845164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b4mgh,Uid:5cbf4626-66f2-463f-a066-0da72079da21,Namespace:kube-system,Attempt:0,}" Dec 16 03:16:33.680505 containerd[1630]: time="2025-12-16T03:16:33.680448455Z" level=info msg="connecting to shim 25ed9519b3bdb9d817bde223fadbe17876ae55fd4267a77c4b5279780dddac78" address="unix:///run/containerd/s/834e73d64becda2ec917f13a95b4ec12ed40c02b24b20c9024cf592f22bfb923" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:16:33.705986 systemd[1]: Started cri-containerd-25ed9519b3bdb9d817bde223fadbe17876ae55fd4267a77c4b5279780dddac78.scope - libcontainer container 25ed9519b3bdb9d817bde223fadbe17876ae55fd4267a77c4b5279780dddac78. 
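The `nestedpendingoperations.go` entry above shows the volume manager deferring the failed `MountVolume.SetUp` for `kube-api-access-2w7f7` by 500ms ("No retries permitted until ... durationBeforeRetry 500ms"). The kubelet retries such operations with exponential backoff; a sketch of that schedule, assuming the 500ms initial delay seen in the log, a doubling factor, and a cap of 2m2s (the factor and cap are assumptions based on kubelet defaults, not values shown in this log):

```python
from datetime import timedelta

INITIAL_DELAY = timedelta(milliseconds=500)   # matches durationBeforeRetry 500ms in the log
FACTOR = 2.0                                  # assumed doubling factor
MAX_DELAY = timedelta(minutes=2, seconds=2)   # assumed cap (kubelet default), not in this log

def backoff_schedule(attempts):
    """Delay imposed before each successive retry of a failed volume operation."""
    delays, delay = [], INITIAL_DELAY
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * FACTOR, MAX_DELAY)
    return delays
```

In this boot only the first 500ms delay is ever observed: the mount succeeds once the `kube-root-ca.crt` ConfigMap is published to the namespace.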
Dec 16 03:16:33.713000 audit: BPF prog-id=133 op=LOAD Dec 16 03:16:33.716143 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 03:16:33.716192 kernel: audit: type=1334 audit(1765854993.713:443): prog-id=133 op=LOAD Dec 16 03:16:33.721531 kernel: audit: type=1334 audit(1765854993.715:444): prog-id=134 op=LOAD Dec 16 03:16:33.715000 audit: BPF prog-id=134 op=LOAD Dec 16 03:16:33.729197 kernel: audit: type=1300 audit(1765854993.715:444): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2892 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:33.715000 audit[2902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2892 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:33.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656439353139623362646239643831376264653232336661646265 Dec 16 03:16:33.715000 audit: BPF prog-id=134 op=UNLOAD Dec 16 03:16:33.738196 kernel: audit: type=1327 audit(1765854993.715:444): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656439353139623362646239643831376264653232336661646265 Dec 16 03:16:33.738242 kernel: audit: type=1334 audit(1765854993.715:445): prog-id=134 op=UNLOAD Dec 16 03:16:33.715000 audit[2902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2892 pid=2902 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:33.741423 kernel: audit: type=1300 audit(1765854993.715:445): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2892 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:33.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656439353139623362646239643831376264653232336661646265 Dec 16 03:16:33.748459 kernel: audit: type=1327 audit(1765854993.715:445): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656439353139623362646239643831376264653232336661646265 Dec 16 03:16:33.715000 audit: BPF prog-id=135 op=LOAD Dec 16 03:16:33.756111 kernel: audit: type=1334 audit(1765854993.715:446): prog-id=135 op=LOAD Dec 16 03:16:33.756155 kernel: audit: type=1300 audit(1765854993.715:446): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2892 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:33.715000 audit[2902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2892 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:16:33.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656439353139623362646239643831376264653232336661646265 Dec 16 03:16:33.764377 kernel: audit: type=1327 audit(1765854993.715:446): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656439353139623362646239643831376264653232336661646265 Dec 16 03:16:33.767924 containerd[1630]: time="2025-12-16T03:16:33.767897136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b4mgh,Uid:5cbf4626-66f2-463f-a066-0da72079da21,Namespace:kube-system,Attempt:0,} returns sandbox id \"25ed9519b3bdb9d817bde223fadbe17876ae55fd4267a77c4b5279780dddac78\"" Dec 16 03:16:33.715000 audit: BPF prog-id=136 op=LOAD Dec 16 03:16:33.715000 audit[2902]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2892 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:33.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656439353139623362646239643831376264653232336661646265 Dec 16 03:16:33.715000 audit: BPF prog-id=136 op=UNLOAD Dec 16 03:16:33.715000 audit[2902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2892 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:33.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656439353139623362646239643831376264653232336661646265 Dec 16 03:16:33.715000 audit: BPF prog-id=135 op=UNLOAD Dec 16 03:16:33.715000 audit[2902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2892 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:33.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656439353139623362646239643831376264653232336661646265 Dec 16 03:16:33.715000 audit: BPF prog-id=137 op=LOAD Dec 16 03:16:33.715000 audit[2902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2892 pid=2902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:33.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235656439353139623362646239643831376264653232336661646265 Dec 16 03:16:33.772460 containerd[1630]: time="2025-12-16T03:16:33.772438989Z" level=info msg="CreateContainer within sandbox \"25ed9519b3bdb9d817bde223fadbe17876ae55fd4267a77c4b5279780dddac78\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 03:16:33.788326 
containerd[1630]: time="2025-12-16T03:16:33.788225367Z" level=info msg="Container ceb4c2bca4d5790b15cb7f6fc753baa5414acafc3f7462486b5f0f2214bd61b0: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:16:33.801800 containerd[1630]: time="2025-12-16T03:16:33.801747656Z" level=info msg="CreateContainer within sandbox \"25ed9519b3bdb9d817bde223fadbe17876ae55fd4267a77c4b5279780dddac78\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ceb4c2bca4d5790b15cb7f6fc753baa5414acafc3f7462486b5f0f2214bd61b0\"" Dec 16 03:16:33.803225 containerd[1630]: time="2025-12-16T03:16:33.803161385Z" level=info msg="StartContainer for \"ceb4c2bca4d5790b15cb7f6fc753baa5414acafc3f7462486b5f0f2214bd61b0\"" Dec 16 03:16:33.805119 containerd[1630]: time="2025-12-16T03:16:33.805096284Z" level=info msg="connecting to shim ceb4c2bca4d5790b15cb7f6fc753baa5414acafc3f7462486b5f0f2214bd61b0" address="unix:///run/containerd/s/834e73d64becda2ec917f13a95b4ec12ed40c02b24b20c9024cf592f22bfb923" protocol=ttrpc version=3 Dec 16 03:16:33.821938 systemd[1]: Started cri-containerd-ceb4c2bca4d5790b15cb7f6fc753baa5414acafc3f7462486b5f0f2214bd61b0.scope - libcontainer container ceb4c2bca4d5790b15cb7f6fc753baa5414acafc3f7462486b5f0f2214bd61b0. 
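The audit `PROCTITLE` records interleaved with the containerd entries above encode the runc command line as a hex string, with NUL bytes separating the arguments. A small sketch for turning one back into a readable argv (the sample is the leading portion of one of the hex strings logged above; the full values are truncated in the log):

```python
def decode_proctitle(hex_title: str) -> list[str]:
    """Audit PROCTITLE is the process's argv, hex-encoded and NUL-separated."""
    return bytes.fromhex(hex_title).decode("utf-8", "replace").split("\x00")

# Leading portion of a PROCTITLE value from the audit records above.
sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"
argv = decode_proctitle(sample)
# → ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']
```

The same decoding applies to any `proctitle=` field emitted by kauditd, which is why the identical hex blob repeats for each syscall record of one runc invocation.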
Dec 16 03:16:33.858000 audit: BPF prog-id=138 op=LOAD
Dec 16 03:16:33.858000 audit[2929]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2892 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:33.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365623463326263613464353739306231356362376636666337353362
Dec 16 03:16:33.859000 audit: BPF prog-id=139 op=LOAD
Dec 16 03:16:33.859000 audit[2929]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2892 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:33.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365623463326263613464353739306231356362376636666337353362
Dec 16 03:16:33.859000 audit: BPF prog-id=139 op=UNLOAD
Dec 16 03:16:33.859000 audit[2929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2892 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:33.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365623463326263613464353739306231356362376636666337353362
Dec 16 03:16:33.859000 audit: BPF prog-id=138 op=UNLOAD
Dec 16 03:16:33.859000 audit[2929]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2892 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:33.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365623463326263613464353739306231356362376636666337353362
Dec 16 03:16:33.859000 audit: BPF prog-id=140 op=LOAD
Dec 16 03:16:33.859000 audit[2929]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2892 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:33.859000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365623463326263613464353739306231356362376636666337353362
Dec 16 03:16:33.879476 containerd[1630]: time="2025-12-16T03:16:33.879355756Z" level=info msg="StartContainer for \"ceb4c2bca4d5790b15cb7f6fc753baa5414acafc3f7462486b5f0f2214bd61b0\" returns successfully"
Dec 16 03:16:34.173000 audit[2993]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=2993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.173000 audit[2994]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=2994 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.173000 audit[2994]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe89734120 a2=0 a3=3bebb2cf645d3e92 items=0 ppid=2942 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.173000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Dec 16 03:16:34.173000 audit[2993]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7fc8b2b0 a2=0 a3=7fff7fc8b29c items=0 ppid=2942 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.173000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65
Dec 16 03:16:34.174000 audit[2995]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=2995 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.174000 audit[2995]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd73fc98f0 a2=0 a3=7ffd73fc98dc items=0 ppid=2942 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.174000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Dec 16 03:16:34.174000 audit[2996]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=2996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.174000 audit[2996]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd83f80390 a2=0 a3=7ffd83f8037c items=0 ppid=2942 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.174000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174
Dec 16 03:16:34.175000 audit[2997]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=2997 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.175000 audit[2997]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc850aac20 a2=0 a3=7ffc850aac0c items=0 ppid=2942 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.175000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Dec 16 03:16:34.176000 audit[2998]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=2998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.176000 audit[2998]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2d591540 a2=0 a3=7ffc2d59152c items=0 ppid=2942 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.176000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572
Dec 16 03:16:34.283000 audit[2999]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.283000 audit[2999]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe0aefd810 a2=0 a3=7ffe0aefd7fc items=0 ppid=2942 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.283000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572
Dec 16 03:16:34.287000 audit[3001]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3001 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.287000 audit[3001]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdfe7fa400 a2=0 a3=7ffdfe7fa3ec items=0 ppid=2942 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.287000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365
Dec 16 03:16:34.290000 audit[3004]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3004 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.290000 audit[3004]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe0f2ff040 a2=0 a3=7ffe0f2ff02c items=0 ppid=2942 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.290000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669
Dec 16 03:16:34.291000 audit[3005]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.291000 audit[3005]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5cabdf70 a2=0 a3=7fff5cabdf5c items=0 ppid=2942 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.291000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572
Dec 16 03:16:34.294000 audit[3007]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3007 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.294000 audit[3007]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1a64b810 a2=0 a3=7fff1a64b7fc items=0 ppid=2942 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.294000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453
Dec 16 03:16:34.295000 audit[3008]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3008 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.295000 audit[3008]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd41e4bec0 a2=0 a3=7ffd41e4beac items=0 ppid=2942 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.295000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572
Dec 16 03:16:34.297000 audit[3010]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3010 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.297000 audit[3010]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff3b236fb0 a2=0 a3=7fff3b236f9c items=0 ppid=2942 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.297000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D
Dec 16 03:16:34.300000 audit[3013]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3013 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.300000 audit[3013]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc17f98ff0 a2=0 a3=7ffc17f98fdc items=0 ppid=2942 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.300000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53
Dec 16 03:16:34.303000 audit[3014]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3014 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.303000 audit[3014]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd96a501c0 a2=0 a3=7ffd96a501ac items=0 ppid=2942 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.303000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572
Dec 16 03:16:34.306000 audit[3016]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3016 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.306000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe7df7d960 a2=0 a3=7ffe7df7d94c items=0 ppid=2942 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.306000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244
Dec 16 03:16:34.307000 audit[3017]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3017 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.307000 audit[3017]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd63be4f10 a2=0 a3=7ffd63be4efc items=0 ppid=2942 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.307000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572
Dec 16 03:16:34.309000 audit[3019]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3019 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.309000 audit[3019]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd99c9a8b0 a2=0 a3=7ffd99c9a89c items=0 ppid=2942 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.309000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Dec 16 03:16:34.313000 audit[3022]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3022 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.313000 audit[3022]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe40a20500 a2=0 a3=7ffe40a204ec items=0 ppid=2942 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.313000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Dec 16 03:16:34.316000 audit[3025]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.316000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc9c3707a0 a2=0 a3=7ffc9c37078c items=0 ppid=2942 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.316000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Dec 16 03:16:34.317000 audit[3026]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3026 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.317000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffffd8c1c40 a2=0 a3=7ffffd8c1c2c items=0 ppid=2942 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.317000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Dec 16 03:16:34.320000 audit[3028]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3028 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.320000 audit[3028]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff4d7d7db0 a2=0 a3=7fff4d7d7d9c items=0 ppid=2942 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.320000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Dec 16 03:16:34.323000 audit[3031]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.323000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe7a245660 a2=0 a3=7ffe7a24564c items=0 ppid=2942 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.323000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Dec 16 03:16:34.324000 audit[3032]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.324000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdaa87d240 a2=0 a3=7ffdaa87d22c items=0 ppid=2942 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.324000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174
Dec 16 03:16:34.327000 audit[3034]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:16:34.327000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffce00d2d20 a2=0 a3=7ffce00d2d0c items=0 ppid=2942 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.327000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47
Dec 16 03:16:34.348000 audit[3040]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 03:16:34.348000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc77bf6a00 a2=0 a3=7ffc77bf69ec items=0 ppid=2942 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 03:16:34.360000 audit[3040]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 03:16:34.360000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc77bf6a00 a2=0 a3=7ffc77bf69ec items=0 ppid=2942 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.360000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 03:16:34.361000 audit[3045]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3045 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.361000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc1ba63740 a2=0 a3=7ffc1ba6372c items=0 ppid=2942 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.361000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572
Dec 16 03:16:34.364000 audit[3047]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.364000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffb98e14f0 a2=0 a3=7fffb98e14dc items=0 ppid=2942 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.364000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963
Dec 16 03:16:34.367000 audit[3050]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.367000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc755dac00 a2=0 a3=7ffc755dabec items=0 ppid=2942 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.367000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276
Dec 16 03:16:34.368000 audit[3051]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3051 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.368000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffec015e20 a2=0 a3=7fffec015e0c items=0 ppid=2942 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.368000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572
Dec 16 03:16:34.372000 audit[3053]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3053 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.372000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc35e2740 a2=0 a3=7fffc35e272c items=0 ppid=2942 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.372000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453
Dec 16 03:16:34.373000 audit[3054]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.373000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd369c0990 a2=0 a3=7ffd369c097c items=0 ppid=2942 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.373000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572
Dec 16 03:16:34.375000 audit[3056]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.375000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe34aebc00 a2=0 a3=7ffe34aebbec items=0 ppid=2942 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.375000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245
Dec 16 03:16:34.376886 kubelet[2816]: E1216 03:16:34.376394 2816 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Dec 16 03:16:34.376886 kubelet[2816]: E1216 03:16:34.376422 2816 projected.go:194] Error preparing data for projected volume kube-api-access-wnklp for pod tigera-operator/tigera-operator-7dcd859c48-87cs8: failed to sync configmap cache: timed out waiting for the condition
Dec 16 03:16:34.376886 kubelet[2816]: E1216 03:16:34.376469 2816 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3c7f6c52-617f-4f79-85f5-ad7eada65a28-kube-api-access-wnklp podName:3c7f6c52-617f-4f79-85f5-ad7eada65a28 nodeName:}" failed. No retries permitted until 2025-12-16 03:16:34.876451924 +0000 UTC m=+8.245342642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wnklp" (UniqueName: "kubernetes.io/projected/3c7f6c52-617f-4f79-85f5-ad7eada65a28-kube-api-access-wnklp") pod "tigera-operator-7dcd859c48-87cs8" (UID: "3c7f6c52-617f-4f79-85f5-ad7eada65a28") : failed to sync configmap cache: timed out waiting for the condition
Dec 16 03:16:34.380000 audit[3059]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3059 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.380000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdd97f9db0 a2=0 a3=7ffdd97f9d9c items=0 ppid=2942 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.380000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D
Dec 16 03:16:34.381000 audit[3060]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.381000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd25c9bbf0 a2=0 a3=7ffd25c9bbdc items=0 ppid=2942 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.381000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572
Dec 16 03:16:34.383000 audit[3062]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.383000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffca3ea25f0 a2=0 a3=7ffca3ea25dc items=0 ppid=2942 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.383000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244
Dec 16 03:16:34.384000 audit[3063]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3063 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.384000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc464c9760 a2=0 a3=7ffc464c974c items=0 ppid=2942 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.384000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572
Dec 16 03:16:34.387000 audit[3065]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3065 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.387000 audit[3065]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc4e4120d0 a2=0 a3=7ffc4e4120bc items=0 ppid=2942 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.387000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A
Dec 16 03:16:34.390000 audit[3068]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.390000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcfe020a70 a2=0 a3=7ffcfe020a5c items=0 ppid=2942 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.390000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D
Dec 16 03:16:34.393000 audit[3071]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.393000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc99dd8f00 a2=0 a3=7ffc99dd8eec items=0 ppid=2942 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.393000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C
Dec 16 03:16:34.394000 audit[3072]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.394000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffecb59a7b0 a2=0 a3=7ffecb59a79c items=0 ppid=2942 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.394000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174
Dec 16 03:16:34.397000 audit[3074]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.397000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc3532fab0 a2=0 a3=7ffc3532fa9c items=0 ppid=2942 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.397000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Dec 16 03:16:34.400000 audit[3077]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.400000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe1c474960 a2=0 a3=7ffe1c47494c items=0 ppid=2942 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.400000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553
Dec 16 03:16:34.401000 audit[3078]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.401000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0f7bfc90 a2=0 a3=7ffe0f7bfc7c items=0 ppid=2942 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.401000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174
Dec 16 03:16:34.404000 audit[3080]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:16:34.404000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff89571d20 a2=0 a3=7fff89571d0c items=0 ppid=2942 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:34.404000 audit: PROCTITLE
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:16:34.405000 audit[3081]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:34.405000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff930ce170 a2=0 a3=7fff930ce15c items=0 ppid=2942 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:34.405000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:16:34.407000 audit[3083]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:34.407000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe85233190 a2=0 a3=7ffe8523317c items=0 ppid=2942 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:34.407000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:16:34.411000 audit[3086]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:16:34.411000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc4a93b7d0 a2=0 a3=7ffc4a93b7bc items=0 ppid=2942 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:34.411000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:16:34.413000 audit[3088]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:16:34.413000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff4ca26760 a2=0 a3=7fff4ca2674c items=0 ppid=2942 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:34.413000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:34.414000 audit[3088]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:16:34.414000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff4ca26760 a2=0 a3=7fff4ca2674c items=0 ppid=2942 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:34.414000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:34.476512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2876983526.mount: Deactivated successfully. 
Dec 16 03:16:34.805452 kubelet[2816]: I1216 03:16:34.805195 2816 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b4mgh" podStartSLOduration=2.80517539 podStartE2EDuration="2.80517539s" podCreationTimestamp="2025-12-16 03:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:16:34.804352981 +0000 UTC m=+8.173243699" watchObservedRunningTime="2025-12-16 03:16:34.80517539 +0000 UTC m=+8.174066119" Dec 16 03:16:35.044733 containerd[1630]: time="2025-12-16T03:16:35.044694231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-87cs8,Uid:3c7f6c52-617f-4f79-85f5-ad7eada65a28,Namespace:tigera-operator,Attempt:0,}" Dec 16 03:16:35.064524 containerd[1630]: time="2025-12-16T03:16:35.063899860Z" level=info msg="connecting to shim c454604b4aa47755bb95ce37a7d61e585cf9104e3a224798569a41c09b093c9b" address="unix:///run/containerd/s/c9467190338efe3aa9a7f4cdd250bca1e0fdad25bd6acb7c0b2b0b13982666df" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:16:35.090922 systemd[1]: Started cri-containerd-c454604b4aa47755bb95ce37a7d61e585cf9104e3a224798569a41c09b093c9b.scope - libcontainer container c454604b4aa47755bb95ce37a7d61e585cf9104e3a224798569a41c09b093c9b. 
Dec 16 03:16:35.100000 audit: BPF prog-id=141 op=LOAD Dec 16 03:16:35.100000 audit: BPF prog-id=142 op=LOAD Dec 16 03:16:35.100000 audit[3109]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3098 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:35.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334353436303462346161343737353562623935636533376137643631 Dec 16 03:16:35.101000 audit: BPF prog-id=142 op=UNLOAD Dec 16 03:16:35.101000 audit[3109]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3098 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:35.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334353436303462346161343737353562623935636533376137643631 Dec 16 03:16:35.101000 audit: BPF prog-id=143 op=LOAD Dec 16 03:16:35.101000 audit[3109]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3098 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:35.101000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334353436303462346161343737353562623935636533376137643631 Dec 16 03:16:35.101000 audit: BPF prog-id=144 op=LOAD Dec 16 03:16:35.101000 audit[3109]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3098 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:35.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334353436303462346161343737353562623935636533376137643631 Dec 16 03:16:35.101000 audit: BPF prog-id=144 op=UNLOAD Dec 16 03:16:35.101000 audit[3109]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3098 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:35.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334353436303462346161343737353562623935636533376137643631 Dec 16 03:16:35.101000 audit: BPF prog-id=143 op=UNLOAD Dec 16 03:16:35.101000 audit[3109]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3098 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:16:35.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334353436303462346161343737353562623935636533376137643631 Dec 16 03:16:35.101000 audit: BPF prog-id=145 op=LOAD Dec 16 03:16:35.101000 audit[3109]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3098 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:35.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334353436303462346161343737353562623935636533376137643631 Dec 16 03:16:35.141227 containerd[1630]: time="2025-12-16T03:16:35.141083905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-87cs8,Uid:3c7f6c52-617f-4f79-85f5-ad7eada65a28,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c454604b4aa47755bb95ce37a7d61e585cf9104e3a224798569a41c09b093c9b\"" Dec 16 03:16:35.142936 containerd[1630]: time="2025-12-16T03:16:35.142910187Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 03:16:37.923337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount37858191.mount: Deactivated successfully. 
Dec 16 03:16:38.610096 containerd[1630]: time="2025-12-16T03:16:38.610045430Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:38.611038 containerd[1630]: time="2025-12-16T03:16:38.611007612Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 16 03:16:38.612352 containerd[1630]: time="2025-12-16T03:16:38.612323397Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:38.615163 containerd[1630]: time="2025-12-16T03:16:38.614280519Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:38.615163 containerd[1630]: time="2025-12-16T03:16:38.614962242Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.472023928s" Dec 16 03:16:38.615163 containerd[1630]: time="2025-12-16T03:16:38.614982380Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 03:16:38.619103 containerd[1630]: time="2025-12-16T03:16:38.619059493Z" level=info msg="CreateContainer within sandbox \"c454604b4aa47755bb95ce37a7d61e585cf9104e3a224798569a41c09b093c9b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 03:16:38.627034 containerd[1630]: time="2025-12-16T03:16:38.626995517Z" level=info msg="Container 
156b3a912d272fac9bd74ccd31a0fc1a65c77f812a7f5edf1aa5fbe9339a0ebf: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:16:38.633658 containerd[1630]: time="2025-12-16T03:16:38.633604871Z" level=info msg="CreateContainer within sandbox \"c454604b4aa47755bb95ce37a7d61e585cf9104e3a224798569a41c09b093c9b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"156b3a912d272fac9bd74ccd31a0fc1a65c77f812a7f5edf1aa5fbe9339a0ebf\"" Dec 16 03:16:38.634198 containerd[1630]: time="2025-12-16T03:16:38.634140107Z" level=info msg="StartContainer for \"156b3a912d272fac9bd74ccd31a0fc1a65c77f812a7f5edf1aa5fbe9339a0ebf\"" Dec 16 03:16:38.635080 containerd[1630]: time="2025-12-16T03:16:38.635063296Z" level=info msg="connecting to shim 156b3a912d272fac9bd74ccd31a0fc1a65c77f812a7f5edf1aa5fbe9339a0ebf" address="unix:///run/containerd/s/c9467190338efe3aa9a7f4cdd250bca1e0fdad25bd6acb7c0b2b0b13982666df" protocol=ttrpc version=3 Dec 16 03:16:38.652914 systemd[1]: Started cri-containerd-156b3a912d272fac9bd74ccd31a0fc1a65c77f812a7f5edf1aa5fbe9339a0ebf.scope - libcontainer container 156b3a912d272fac9bd74ccd31a0fc1a65c77f812a7f5edf1aa5fbe9339a0ebf. 
Dec 16 03:16:38.661000 audit: BPF prog-id=146 op=LOAD Dec 16 03:16:38.663000 audit: BPF prog-id=147 op=LOAD Dec 16 03:16:38.663000 audit[3141]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3098 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:38.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135366233613931326432373266616339626437346363643331613066 Dec 16 03:16:38.663000 audit: BPF prog-id=147 op=UNLOAD Dec 16 03:16:38.663000 audit[3141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3098 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:38.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135366233613931326432373266616339626437346363643331613066 Dec 16 03:16:38.663000 audit: BPF prog-id=148 op=LOAD Dec 16 03:16:38.663000 audit[3141]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3098 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:38.663000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135366233613931326432373266616339626437346363643331613066 Dec 16 03:16:38.663000 audit: BPF prog-id=149 op=LOAD Dec 16 03:16:38.663000 audit[3141]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3098 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:38.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135366233613931326432373266616339626437346363643331613066 Dec 16 03:16:38.664000 audit: BPF prog-id=149 op=UNLOAD Dec 16 03:16:38.664000 audit[3141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3098 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:38.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135366233613931326432373266616339626437346363643331613066 Dec 16 03:16:38.664000 audit: BPF prog-id=148 op=UNLOAD Dec 16 03:16:38.664000 audit[3141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3098 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:16:38.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135366233613931326432373266616339626437346363643331613066 Dec 16 03:16:38.664000 audit: BPF prog-id=150 op=LOAD Dec 16 03:16:38.664000 audit[3141]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3098 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:38.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135366233613931326432373266616339626437346363643331613066 Dec 16 03:16:38.679916 containerd[1630]: time="2025-12-16T03:16:38.679872506Z" level=info msg="StartContainer for \"156b3a912d272fac9bd74ccd31a0fc1a65c77f812a7f5edf1aa5fbe9339a0ebf\" returns successfully" Dec 16 03:16:38.817182 kubelet[2816]: I1216 03:16:38.816996 2816 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-87cs8" podStartSLOduration=2.343552444 podStartE2EDuration="5.816978711s" podCreationTimestamp="2025-12-16 03:16:33 +0000 UTC" firstStartedPulling="2025-12-16 03:16:35.142278263 +0000 UTC m=+8.511168972" lastFinishedPulling="2025-12-16 03:16:38.615704531 +0000 UTC m=+11.984595239" observedRunningTime="2025-12-16 03:16:38.816059802 +0000 UTC m=+12.184950511" watchObservedRunningTime="2025-12-16 03:16:38.816978711 +0000 UTC m=+12.185869429" Dec 16 03:16:44.688265 sudo[1880]: pam_unix(sudo:session): session closed for user root Dec 16 03:16:44.695848 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 03:16:44.696013 
kernel: audit: type=1106 audit(1765855004.687:523): pid=1880 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:44.687000 audit[1880]: USER_END pid=1880 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:44.687000 audit[1880]: CRED_DISP pid=1880 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:44.705803 kernel: audit: type=1104 audit(1765855004.687:524): pid=1880 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:16:44.850278 sshd[1879]: Connection closed by 139.178.89.65 port 52990 Dec 16 03:16:44.850735 sshd-session[1875]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:44.852000 audit[1875]: USER_END pid=1875 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:16:44.856040 systemd-logind[1604]: Session 8 logged out. Waiting for processes to exit. Dec 16 03:16:44.857089 systemd[1]: sshd@6-65.108.246.88:22-139.178.89.65:52990.service: Deactivated successfully. 
Dec 16 03:16:44.860884 kernel: audit: type=1106 audit(1765855004.852:525): pid=1875 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:16:44.860025 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 03:16:44.860239 systemd[1]: session-8.scope: Consumed 4.679s CPU time, 157M memory peak. Dec 16 03:16:44.852000 audit[1875]: CRED_DISP pid=1875 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:16:44.864224 systemd-logind[1604]: Removed session 8. Dec 16 03:16:44.868771 kernel: audit: type=1104 audit(1765855004.852:526): pid=1875 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:16:44.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-65.108.246.88:22-139.178.89.65:52990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:44.874800 kernel: audit: type=1131 audit(1765855004.854:527): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-65.108.246.88:22-139.178.89.65:52990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:45.228000 audit[3220]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3220 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:45.233872 kernel: audit: type=1325 audit(1765855005.228:528): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3220 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:45.228000 audit[3220]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff26b8c5a0 a2=0 a3=7fff26b8c58c items=0 ppid=2942 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:45.244779 kernel: audit: type=1300 audit(1765855005.228:528): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff26b8c5a0 a2=0 a3=7fff26b8c58c items=0 ppid=2942 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:45.228000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:45.250868 kernel: audit: type=1327 audit(1765855005.228:528): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:45.234000 audit[3220]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3220 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:45.262807 kernel: audit: type=1325 audit(1765855005.234:529): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3220 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:45.262877 kernel: audit: type=1300 audit(1765855005.234:529): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff26b8c5a0 
a2=0 a3=0 items=0 ppid=2942 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:45.234000 audit[3220]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff26b8c5a0 a2=0 a3=0 items=0 ppid=2942 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:45.234000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:45.268000 audit[3222]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:45.268000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffebd03dc20 a2=0 a3=7ffebd03dc0c items=0 ppid=2942 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:45.268000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:45.275000 audit[3222]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:45.275000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffebd03dc20 a2=0 a3=0 items=0 ppid=2942 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:45.275000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:47.277000 audit[3225]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:47.277000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff3e176e20 a2=0 a3=7fff3e176e0c items=0 ppid=2942 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:47.277000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:47.281000 audit[3225]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:47.281000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff3e176e20 a2=0 a3=0 items=0 ppid=2942 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:47.281000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:47.290000 audit[3227]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:47.290000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff01fcb570 a2=0 a3=7fff01fcb55c items=0 ppid=2942 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:47.290000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:47.294000 audit[3227]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:47.294000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff01fcb570 a2=0 a3=0 items=0 ppid=2942 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:47.294000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:48.303000 audit[3229]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:48.303000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc122ae1f0 a2=0 a3=7ffc122ae1dc items=0 ppid=2942 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:48.303000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:48.307000 audit[3229]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:48.307000 audit[3229]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc122ae1f0 a2=0 a3=0 items=0 ppid=2942 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:48.307000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:49.078558 systemd[1]: Created slice kubepods-besteffort-podbcaf6246_65b7_4815_9dce_b3c0cc5974b8.slice - libcontainer container kubepods-besteffort-podbcaf6246_65b7_4815_9dce_b3c0cc5974b8.slice. Dec 16 03:16:49.189017 kubelet[2816]: I1216 03:16:49.188937 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94t8p\" (UniqueName: \"kubernetes.io/projected/bcaf6246-65b7-4815-9dce-b3c0cc5974b8-kube-api-access-94t8p\") pod \"calico-typha-848b49bb7d-4dmzc\" (UID: \"bcaf6246-65b7-4815-9dce-b3c0cc5974b8\") " pod="calico-system/calico-typha-848b49bb7d-4dmzc" Dec 16 03:16:49.189825 kubelet[2816]: I1216 03:16:49.189696 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bcaf6246-65b7-4815-9dce-b3c0cc5974b8-typha-certs\") pod \"calico-typha-848b49bb7d-4dmzc\" (UID: \"bcaf6246-65b7-4815-9dce-b3c0cc5974b8\") " pod="calico-system/calico-typha-848b49bb7d-4dmzc" Dec 16 03:16:49.189825 kubelet[2816]: I1216 03:16:49.189730 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcaf6246-65b7-4815-9dce-b3c0cc5974b8-tigera-ca-bundle\") pod \"calico-typha-848b49bb7d-4dmzc\" (UID: \"bcaf6246-65b7-4815-9dce-b3c0cc5974b8\") " pod="calico-system/calico-typha-848b49bb7d-4dmzc" Dec 16 03:16:49.269294 systemd[1]: Created slice kubepods-besteffort-podda6e62d2_a587_48e0_9a95_29a2fefa69f5.slice - libcontainer container kubepods-besteffort-podda6e62d2_a587_48e0_9a95_29a2fefa69f5.slice. 
Dec 16 03:16:49.293528 kubelet[2816]: I1216 03:16:49.293484 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/da6e62d2-a587-48e0-9a95-29a2fefa69f5-cni-net-dir\") pod \"calico-node-vrxzh\" (UID: \"da6e62d2-a587-48e0-9a95-29a2fefa69f5\") " pod="calico-system/calico-node-vrxzh" Dec 16 03:16:49.293692 kubelet[2816]: I1216 03:16:49.293562 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da6e62d2-a587-48e0-9a95-29a2fefa69f5-xtables-lock\") pod \"calico-node-vrxzh\" (UID: \"da6e62d2-a587-48e0-9a95-29a2fefa69f5\") " pod="calico-system/calico-node-vrxzh" Dec 16 03:16:49.293692 kubelet[2816]: I1216 03:16:49.293629 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/da6e62d2-a587-48e0-9a95-29a2fefa69f5-cni-bin-dir\") pod \"calico-node-vrxzh\" (UID: \"da6e62d2-a587-48e0-9a95-29a2fefa69f5\") " pod="calico-system/calico-node-vrxzh" Dec 16 03:16:49.293692 kubelet[2816]: I1216 03:16:49.293651 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da6e62d2-a587-48e0-9a95-29a2fefa69f5-tigera-ca-bundle\") pod \"calico-node-vrxzh\" (UID: \"da6e62d2-a587-48e0-9a95-29a2fefa69f5\") " pod="calico-system/calico-node-vrxzh" Dec 16 03:16:49.293692 kubelet[2816]: I1216 03:16:49.293676 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/da6e62d2-a587-48e0-9a95-29a2fefa69f5-node-certs\") pod \"calico-node-vrxzh\" (UID: \"da6e62d2-a587-48e0-9a95-29a2fefa69f5\") " pod="calico-system/calico-node-vrxzh" Dec 16 03:16:49.294360 kubelet[2816]: I1216 03:16:49.293703 2816 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/da6e62d2-a587-48e0-9a95-29a2fefa69f5-policysync\") pod \"calico-node-vrxzh\" (UID: \"da6e62d2-a587-48e0-9a95-29a2fefa69f5\") " pod="calico-system/calico-node-vrxzh" Dec 16 03:16:49.294360 kubelet[2816]: I1216 03:16:49.293730 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/da6e62d2-a587-48e0-9a95-29a2fefa69f5-var-run-calico\") pod \"calico-node-vrxzh\" (UID: \"da6e62d2-a587-48e0-9a95-29a2fefa69f5\") " pod="calico-system/calico-node-vrxzh" Dec 16 03:16:49.294360 kubelet[2816]: I1216 03:16:49.293778 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/da6e62d2-a587-48e0-9a95-29a2fefa69f5-cni-log-dir\") pod \"calico-node-vrxzh\" (UID: \"da6e62d2-a587-48e0-9a95-29a2fefa69f5\") " pod="calico-system/calico-node-vrxzh" Dec 16 03:16:49.294360 kubelet[2816]: I1216 03:16:49.293806 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/da6e62d2-a587-48e0-9a95-29a2fefa69f5-flexvol-driver-host\") pod \"calico-node-vrxzh\" (UID: \"da6e62d2-a587-48e0-9a95-29a2fefa69f5\") " pod="calico-system/calico-node-vrxzh" Dec 16 03:16:49.294360 kubelet[2816]: I1216 03:16:49.293839 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/da6e62d2-a587-48e0-9a95-29a2fefa69f5-var-lib-calico\") pod \"calico-node-vrxzh\" (UID: \"da6e62d2-a587-48e0-9a95-29a2fefa69f5\") " pod="calico-system/calico-node-vrxzh" Dec 16 03:16:49.294512 kubelet[2816]: I1216 03:16:49.293904 2816 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da6e62d2-a587-48e0-9a95-29a2fefa69f5-lib-modules\") pod \"calico-node-vrxzh\" (UID: \"da6e62d2-a587-48e0-9a95-29a2fefa69f5\") " pod="calico-system/calico-node-vrxzh" Dec 16 03:16:49.294512 kubelet[2816]: I1216 03:16:49.293979 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdnd8\" (UniqueName: \"kubernetes.io/projected/da6e62d2-a587-48e0-9a95-29a2fefa69f5-kube-api-access-rdnd8\") pod \"calico-node-vrxzh\" (UID: \"da6e62d2-a587-48e0-9a95-29a2fefa69f5\") " pod="calico-system/calico-node-vrxzh" Dec 16 03:16:49.320000 audit[3233]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:49.320000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffef57423a0 a2=0 a3=7ffef574238c items=0 ppid=2942 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:49.320000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:49.322000 audit[3233]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:49.322000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffef57423a0 a2=0 a3=0 items=0 ppid=2942 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:49.322000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:49.383471 containerd[1630]: time="2025-12-16T03:16:49.383343663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-848b49bb7d-4dmzc,Uid:bcaf6246-65b7-4815-9dce-b3c0cc5974b8,Namespace:calico-system,Attempt:0,}" Dec 16 03:16:49.402913 kubelet[2816]: E1216 03:16:49.402853 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.402913 kubelet[2816]: W1216 03:16:49.402872 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.405791 kubelet[2816]: E1216 03:16:49.404891 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.405791 kubelet[2816]: W1216 03:16:49.404905 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.411447 kubelet[2816]: E1216 03:16:49.410233 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.411447 kubelet[2816]: E1216 03:16:49.410651 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.411447 kubelet[2816]: W1216 03:16:49.410661 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.411447 kubelet[2816]: E1216 03:16:49.410671 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.411447 kubelet[2816]: E1216 03:16:49.410721 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.423043 containerd[1630]: time="2025-12-16T03:16:49.422962650Z" level=info msg="connecting to shim 255790d87719f7a099ed970a0c6443cd93ea87783df2f3fb98dfcc642c6067e5" address="unix:///run/containerd/s/8f68092749445906fa7f611989adfbe52d0d4d86139fd8ead01807a53e63d698" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:16:49.451907 systemd[1]: Started cri-containerd-255790d87719f7a099ed970a0c6443cd93ea87783df2f3fb98dfcc642c6067e5.scope - libcontainer container 255790d87719f7a099ed970a0c6443cd93ea87783df2f3fb98dfcc642c6067e5. 
Dec 16 03:16:49.466099 kubelet[2816]: E1216 03:16:49.465969 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:16:49.474082 kubelet[2816]: E1216 03:16:49.474034 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.474082 kubelet[2816]: W1216 03:16:49.474054 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.474082 kubelet[2816]: E1216 03:16:49.474071 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.474365 kubelet[2816]: E1216 03:16:49.474219 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.474365 kubelet[2816]: W1216 03:16:49.474225 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.474365 kubelet[2816]: E1216 03:16:49.474232 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.474419 kubelet[2816]: E1216 03:16:49.474378 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.474419 kubelet[2816]: W1216 03:16:49.474385 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.474419 kubelet[2816]: E1216 03:16:49.474392 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.474683 kubelet[2816]: E1216 03:16:49.474649 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.474683 kubelet[2816]: W1216 03:16:49.474659 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.474683 kubelet[2816]: E1216 03:16:49.474667 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.475726 kubelet[2816]: E1216 03:16:49.475129 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.475726 kubelet[2816]: W1216 03:16:49.475142 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.475726 kubelet[2816]: E1216 03:16:49.475150 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.475726 kubelet[2816]: E1216 03:16:49.475541 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.475726 kubelet[2816]: W1216 03:16:49.475549 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.475726 kubelet[2816]: E1216 03:16:49.475556 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.476170 kubelet[2816]: E1216 03:16:49.476125 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.476170 kubelet[2816]: W1216 03:16:49.476134 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.476170 kubelet[2816]: E1216 03:16:49.476141 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.476805 kubelet[2816]: E1216 03:16:49.476738 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.476916 kubelet[2816]: W1216 03:16:49.476855 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.476916 kubelet[2816]: E1216 03:16:49.476865 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.477270 kubelet[2816]: E1216 03:16:49.477140 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.477270 kubelet[2816]: W1216 03:16:49.477149 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.477270 kubelet[2816]: E1216 03:16:49.477157 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.477865 kubelet[2816]: E1216 03:16:49.477791 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.477865 kubelet[2816]: W1216 03:16:49.477801 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.477865 kubelet[2816]: E1216 03:16:49.477809 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.478151 kubelet[2816]: E1216 03:16:49.478117 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.478151 kubelet[2816]: W1216 03:16:49.478130 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.478151 kubelet[2816]: E1216 03:16:49.478137 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.478797 kubelet[2816]: E1216 03:16:49.478394 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.478797 kubelet[2816]: W1216 03:16:49.478403 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.478797 kubelet[2816]: E1216 03:16:49.478411 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.479030 kubelet[2816]: E1216 03:16:49.479008 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.479030 kubelet[2816]: W1216 03:16:49.479022 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.479158 kubelet[2816]: E1216 03:16:49.479143 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.478000 audit: BPF prog-id=151 op=LOAD Dec 16 03:16:49.479545 kubelet[2816]: E1216 03:16:49.479319 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.479545 kubelet[2816]: W1216 03:16:49.479327 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.479545 kubelet[2816]: E1216 03:16:49.479334 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.479545 kubelet[2816]: E1216 03:16:49.479468 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.479664 kubelet[2816]: W1216 03:16:49.479476 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.479664 kubelet[2816]: E1216 03:16:49.479580 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.479975 kubelet[2816]: E1216 03:16:49.479832 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.479975 kubelet[2816]: W1216 03:16:49.479923 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.479975 kubelet[2816]: E1216 03:16:49.479935 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.480315 kubelet[2816]: E1216 03:16:49.480150 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.480315 kubelet[2816]: W1216 03:16:49.480183 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.480315 kubelet[2816]: E1216 03:16:49.480191 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.478000 audit: BPF prog-id=152 op=LOAD Dec 16 03:16:49.478000 audit[3258]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3247 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:49.480565 kubelet[2816]: E1216 03:16:49.480542 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.480565 kubelet[2816]: W1216 03:16:49.480554 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.480685 kubelet[2816]: E1216 03:16:49.480663 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235353739306438373731396637613039396564393730613063363434 Dec 16 03:16:49.480000 audit: BPF prog-id=152 op=UNLOAD Dec 16 03:16:49.480000 audit[3258]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3247 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:49.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235353739306438373731396637613039396564393730613063363434 Dec 16 03:16:49.480000 audit: BPF prog-id=153 op=LOAD Dec 16 03:16:49.480000 audit[3258]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3247 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:49.480000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235353739306438373731396637613039396564393730613063363434 Dec 16 03:16:49.481538 kubelet[2816]: E1216 03:16:49.481466 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.481538 kubelet[2816]: W1216 03:16:49.481473 2816 driver-call.go:149] FlexVolume: driver call failed: 
executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.481538 kubelet[2816]: E1216 03:16:49.481481 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.482169 kubelet[2816]: E1216 03:16:49.482152 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.482169 kubelet[2816]: W1216 03:16:49.482165 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.482226 kubelet[2816]: E1216 03:16:49.482173 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.481000 audit: BPF prog-id=154 op=LOAD Dec 16 03:16:49.481000 audit[3258]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3247 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:49.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235353739306438373731396637613039396564393730613063363434 Dec 16 03:16:49.481000 audit: BPF prog-id=154 op=UNLOAD Dec 16 03:16:49.481000 audit[3258]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3247 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:49.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235353739306438373731396637613039396564393730613063363434 Dec 16 03:16:49.481000 audit: BPF prog-id=153 op=UNLOAD Dec 16 03:16:49.481000 audit[3258]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3247 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:49.481000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235353739306438373731396637613039396564393730613063363434 Dec 16 03:16:49.481000 audit: BPF prog-id=155 op=LOAD Dec 16 03:16:49.481000 audit[3258]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3247 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:49.481000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235353739306438373731396637613039396564393730613063363434 Dec 16 03:16:49.495744 kubelet[2816]: E1216 03:16:49.495727 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.495744 kubelet[2816]: W1216 03:16:49.495806 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.495744 kubelet[2816]: E1216 03:16:49.495822 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.495744 kubelet[2816]: I1216 03:16:49.495853 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f212018-4b88-48fc-94d2-420427ed0241-kubelet-dir\") pod \"csi-node-driver-vbf4g\" (UID: \"8f212018-4b88-48fc-94d2-420427ed0241\") " pod="calico-system/csi-node-driver-vbf4g" Dec 16 03:16:49.496207 kubelet[2816]: E1216 03:16:49.496185 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.496207 kubelet[2816]: W1216 03:16:49.496196 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.496331 kubelet[2816]: E1216 03:16:49.496275 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.496331 kubelet[2816]: I1216 03:16:49.496295 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8f212018-4b88-48fc-94d2-420427ed0241-registration-dir\") pod \"csi-node-driver-vbf4g\" (UID: \"8f212018-4b88-48fc-94d2-420427ed0241\") " pod="calico-system/csi-node-driver-vbf4g" Dec 16 03:16:49.496661 kubelet[2816]: E1216 03:16:49.496639 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.496661 kubelet[2816]: W1216 03:16:49.496651 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.496777 kubelet[2816]: E1216 03:16:49.496727 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.496986 kubelet[2816]: I1216 03:16:49.496745 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmlln\" (UniqueName: \"kubernetes.io/projected/8f212018-4b88-48fc-94d2-420427ed0241-kube-api-access-wmlln\") pod \"csi-node-driver-vbf4g\" (UID: \"8f212018-4b88-48fc-94d2-420427ed0241\") " pod="calico-system/csi-node-driver-vbf4g" Dec 16 03:16:49.497089 kubelet[2816]: E1216 03:16:49.497071 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.497089 kubelet[2816]: W1216 03:16:49.497079 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.497274 kubelet[2816]: E1216 03:16:49.497265 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.497431 kubelet[2816]: E1216 03:16:49.497412 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.497431 kubelet[2816]: W1216 03:16:49.497421 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.497628 kubelet[2816]: E1216 03:16:49.497540 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.497748 kubelet[2816]: E1216 03:16:49.497740 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.497840 kubelet[2816]: W1216 03:16:49.497826 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.497971 kubelet[2816]: E1216 03:16:49.497908 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.497971 kubelet[2816]: I1216 03:16:49.497925 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8f212018-4b88-48fc-94d2-420427ed0241-socket-dir\") pod \"csi-node-driver-vbf4g\" (UID: \"8f212018-4b88-48fc-94d2-420427ed0241\") " pod="calico-system/csi-node-driver-vbf4g" Dec 16 03:16:49.498254 kubelet[2816]: E1216 03:16:49.498223 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.498254 kubelet[2816]: W1216 03:16:49.498233 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.498350 kubelet[2816]: E1216 03:16:49.498321 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.498561 kubelet[2816]: E1216 03:16:49.498534 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.498561 kubelet[2816]: W1216 03:16:49.498542 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.498561 kubelet[2816]: E1216 03:16:49.498549 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.498916 kubelet[2816]: E1216 03:16:49.498897 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.498916 kubelet[2816]: W1216 03:16:49.498906 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.499046 kubelet[2816]: E1216 03:16:49.498985 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.499046 kubelet[2816]: I1216 03:16:49.499002 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8f212018-4b88-48fc-94d2-420427ed0241-varrun\") pod \"csi-node-driver-vbf4g\" (UID: \"8f212018-4b88-48fc-94d2-420427ed0241\") " pod="calico-system/csi-node-driver-vbf4g" Dec 16 03:16:49.499326 kubelet[2816]: E1216 03:16:49.499316 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.499431 kubelet[2816]: W1216 03:16:49.499380 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.499431 kubelet[2816]: E1216 03:16:49.499395 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.499681 kubelet[2816]: E1216 03:16:49.499650 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.499681 kubelet[2816]: W1216 03:16:49.499660 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.499681 kubelet[2816]: E1216 03:16:49.499668 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.500935 kubelet[2816]: E1216 03:16:49.500926 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.501063 kubelet[2816]: W1216 03:16:49.501000 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.501063 kubelet[2816]: E1216 03:16:49.501012 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.501259 kubelet[2816]: E1216 03:16:49.501240 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.501259 kubelet[2816]: W1216 03:16:49.501249 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.501387 kubelet[2816]: E1216 03:16:49.501331 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.501486 kubelet[2816]: E1216 03:16:49.501462 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.501486 kubelet[2816]: W1216 03:16:49.501470 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.501486 kubelet[2816]: E1216 03:16:49.501477 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.501774 kubelet[2816]: E1216 03:16:49.501724 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.501774 kubelet[2816]: W1216 03:16:49.501733 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.501841 kubelet[2816]: E1216 03:16:49.501751 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.527437 containerd[1630]: time="2025-12-16T03:16:49.527289471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-848b49bb7d-4dmzc,Uid:bcaf6246-65b7-4815-9dce-b3c0cc5974b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"255790d87719f7a099ed970a0c6443cd93ea87783df2f3fb98dfcc642c6067e5\"" Dec 16 03:16:49.529379 containerd[1630]: time="2025-12-16T03:16:49.529349370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 03:16:49.594083 containerd[1630]: time="2025-12-16T03:16:49.594048996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vrxzh,Uid:da6e62d2-a587-48e0-9a95-29a2fefa69f5,Namespace:calico-system,Attempt:0,}" Dec 16 03:16:49.601738 kubelet[2816]: E1216 03:16:49.601702 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.601738 kubelet[2816]: W1216 03:16:49.601723 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.601966 kubelet[2816]: E1216 03:16:49.601744 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.602009 kubelet[2816]: E1216 03:16:49.601996 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.602191 kubelet[2816]: W1216 03:16:49.602008 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.602191 kubelet[2816]: E1216 03:16:49.602028 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.602304 kubelet[2816]: E1216 03:16:49.602286 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.602304 kubelet[2816]: W1216 03:16:49.602302 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.602385 kubelet[2816]: E1216 03:16:49.602319 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.602595 kubelet[2816]: E1216 03:16:49.602579 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.602660 kubelet[2816]: W1216 03:16:49.602595 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.602660 kubelet[2816]: E1216 03:16:49.602634 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.602841 kubelet[2816]: E1216 03:16:49.602825 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.602841 kubelet[2816]: W1216 03:16:49.602839 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.602918 kubelet[2816]: E1216 03:16:49.602884 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.603158 kubelet[2816]: E1216 03:16:49.603142 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.603158 kubelet[2816]: W1216 03:16:49.603154 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.603263 kubelet[2816]: E1216 03:16:49.603179 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.603665 kubelet[2816]: E1216 03:16:49.603647 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.603665 kubelet[2816]: W1216 03:16:49.603660 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.603804 kubelet[2816]: E1216 03:16:49.603673 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.604214 kubelet[2816]: E1216 03:16:49.604175 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.604214 kubelet[2816]: W1216 03:16:49.604191 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.604314 kubelet[2816]: E1216 03:16:49.604291 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.604656 kubelet[2816]: E1216 03:16:49.604526 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.604656 kubelet[2816]: W1216 03:16:49.604542 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.604656 kubelet[2816]: E1216 03:16:49.604652 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.605071 kubelet[2816]: E1216 03:16:49.604837 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.605071 kubelet[2816]: W1216 03:16:49.604847 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.605071 kubelet[2816]: E1216 03:16:49.604869 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.605646 kubelet[2816]: E1216 03:16:49.605077 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.605646 kubelet[2816]: W1216 03:16:49.605087 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.605646 kubelet[2816]: E1216 03:16:49.605102 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.605646 kubelet[2816]: E1216 03:16:49.605418 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.605646 kubelet[2816]: W1216 03:16:49.605428 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.605646 kubelet[2816]: E1216 03:16:49.605472 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.606358 kubelet[2816]: E1216 03:16:49.605738 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.606358 kubelet[2816]: W1216 03:16:49.605748 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.606358 kubelet[2816]: E1216 03:16:49.605899 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.606358 kubelet[2816]: E1216 03:16:49.606076 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.606358 kubelet[2816]: W1216 03:16:49.606086 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.606358 kubelet[2816]: E1216 03:16:49.606189 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.606358 kubelet[2816]: E1216 03:16:49.606351 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.606358 kubelet[2816]: W1216 03:16:49.606360 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.607110 kubelet[2816]: E1216 03:16:49.606787 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.607110 kubelet[2816]: E1216 03:16:49.606968 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.607110 kubelet[2816]: W1216 03:16:49.606977 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.607110 kubelet[2816]: E1216 03:16:49.607093 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.607438 kubelet[2816]: E1216 03:16:49.607154 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.607438 kubelet[2816]: W1216 03:16:49.607163 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.607438 kubelet[2816]: E1216 03:16:49.607187 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.608062 kubelet[2816]: E1216 03:16:49.608035 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.608062 kubelet[2816]: W1216 03:16:49.608050 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.608151 kubelet[2816]: E1216 03:16:49.608075 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.608427 kubelet[2816]: E1216 03:16:49.608256 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.608427 kubelet[2816]: W1216 03:16:49.608271 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.608659 kubelet[2816]: E1216 03:16:49.608428 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.608659 kubelet[2816]: E1216 03:16:49.608500 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.608659 kubelet[2816]: W1216 03:16:49.608509 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.609221 kubelet[2816]: E1216 03:16:49.608684 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.609221 kubelet[2816]: E1216 03:16:49.608927 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.609221 kubelet[2816]: W1216 03:16:49.608936 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.609221 kubelet[2816]: E1216 03:16:49.609061 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.609613 kubelet[2816]: E1216 03:16:49.609187 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.609613 kubelet[2816]: W1216 03:16:49.609271 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.609613 kubelet[2816]: E1216 03:16:49.609296 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.610083 kubelet[2816]: E1216 03:16:49.609926 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.610083 kubelet[2816]: W1216 03:16:49.609937 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.610083 kubelet[2816]: E1216 03:16:49.609961 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.610460 kubelet[2816]: E1216 03:16:49.610323 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.610460 kubelet[2816]: W1216 03:16:49.610333 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.610460 kubelet[2816]: E1216 03:16:49.610374 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.611036 kubelet[2816]: E1216 03:16:49.610977 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.611036 kubelet[2816]: W1216 03:16:49.610993 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.611036 kubelet[2816]: E1216 03:16:49.611004 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:49.618386 kubelet[2816]: E1216 03:16:49.618366 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:49.618386 kubelet[2816]: W1216 03:16:49.618381 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:49.619110 kubelet[2816]: E1216 03:16:49.618394 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:49.619149 containerd[1630]: time="2025-12-16T03:16:49.618625898Z" level=info msg="connecting to shim d2f317b5451178eb27a86287c491b45c96fab0464b4a7e54a399c8579d82e3ca" address="unix:///run/containerd/s/6f3f2853b7142c9d54bbfe73106434c19e7d29ddbe4df46ffb2aa974d38501fc" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:16:49.639938 systemd[1]: Started cri-containerd-d2f317b5451178eb27a86287c491b45c96fab0464b4a7e54a399c8579d82e3ca.scope - libcontainer container d2f317b5451178eb27a86287c491b45c96fab0464b4a7e54a399c8579d82e3ca. 
Dec 16 03:16:49.652000 audit: BPF prog-id=156 op=LOAD Dec 16 03:16:49.653000 audit: BPF prog-id=157 op=LOAD Dec 16 03:16:49.653000 audit[3379]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3365 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:49.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432663331376235343531313738656232376138363238376334393162 Dec 16 03:16:49.653000 audit: BPF prog-id=157 op=UNLOAD Dec 16 03:16:49.653000 audit[3379]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:49.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432663331376235343531313738656232376138363238376334393162 Dec 16 03:16:49.653000 audit: BPF prog-id=158 op=LOAD Dec 16 03:16:49.653000 audit[3379]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3365 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:49.653000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432663331376235343531313738656232376138363238376334393162 Dec 16 03:16:49.653000 audit: BPF prog-id=159 op=LOAD Dec 16 03:16:49.653000 audit[3379]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3365 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:49.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432663331376235343531313738656232376138363238376334393162 Dec 16 03:16:49.653000 audit: BPF prog-id=159 op=UNLOAD Dec 16 03:16:49.653000 audit[3379]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:49.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432663331376235343531313738656232376138363238376334393162 Dec 16 03:16:49.653000 audit: BPF prog-id=158 op=UNLOAD Dec 16 03:16:49.653000 audit[3379]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:16:49.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432663331376235343531313738656232376138363238376334393162 Dec 16 03:16:49.653000 audit: BPF prog-id=160 op=LOAD Dec 16 03:16:49.653000 audit[3379]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3365 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:49.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432663331376235343531313738656232376138363238376334393162 Dec 16 03:16:49.668202 containerd[1630]: time="2025-12-16T03:16:49.668161776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vrxzh,Uid:da6e62d2-a587-48e0-9a95-29a2fefa69f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"d2f317b5451178eb27a86287c491b45c96fab0464b4a7e54a399c8579d82e3ca\"" Dec 16 03:16:51.412059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount44600897.mount: Deactivated successfully. 
Dec 16 03:16:51.735235 kubelet[2816]: E1216 03:16:51.735118 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:16:52.514547 containerd[1630]: time="2025-12-16T03:16:52.514480040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:52.515435 containerd[1630]: time="2025-12-16T03:16:52.515341282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 03:16:52.516582 containerd[1630]: time="2025-12-16T03:16:52.516274587Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:52.542194 containerd[1630]: time="2025-12-16T03:16:52.525220009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:52.542194 containerd[1630]: time="2025-12-16T03:16:52.525866316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.996489597s" Dec 16 03:16:52.542194 containerd[1630]: time="2025-12-16T03:16:52.541918828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 03:16:52.542867 containerd[1630]: time="2025-12-16T03:16:52.542846611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 03:16:52.553793 containerd[1630]: time="2025-12-16T03:16:52.553742129Z" level=info msg="CreateContainer within sandbox \"255790d87719f7a099ed970a0c6443cd93ea87783df2f3fb98dfcc642c6067e5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 03:16:52.559781 containerd[1630]: time="2025-12-16T03:16:52.559474771Z" level=info msg="Container 2ea62c44d26dbad9d5313abcccad1f3145949c84d17a834085b94d90ec4e44f8: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:16:52.587363 containerd[1630]: time="2025-12-16T03:16:52.587314979Z" level=info msg="CreateContainer within sandbox \"255790d87719f7a099ed970a0c6443cd93ea87783df2f3fb98dfcc642c6067e5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2ea62c44d26dbad9d5313abcccad1f3145949c84d17a834085b94d90ec4e44f8\"" Dec 16 03:16:52.588827 containerd[1630]: time="2025-12-16T03:16:52.588800312Z" level=info msg="StartContainer for \"2ea62c44d26dbad9d5313abcccad1f3145949c84d17a834085b94d90ec4e44f8\"" Dec 16 03:16:52.589915 containerd[1630]: time="2025-12-16T03:16:52.589889257Z" level=info msg="connecting to shim 2ea62c44d26dbad9d5313abcccad1f3145949c84d17a834085b94d90ec4e44f8" address="unix:///run/containerd/s/8f68092749445906fa7f611989adfbe52d0d4d86139fd8ead01807a53e63d698" protocol=ttrpc version=3 Dec 16 03:16:52.643202 systemd[1]: Started cri-containerd-2ea62c44d26dbad9d5313abcccad1f3145949c84d17a834085b94d90ec4e44f8.scope - libcontainer container 2ea62c44d26dbad9d5313abcccad1f3145949c84d17a834085b94d90ec4e44f8. 
Dec 16 03:16:52.659798 kernel: kauditd_printk_skb: 75 callbacks suppressed Dec 16 03:16:52.659892 kernel: audit: type=1334 audit(1765855012.656:556): prog-id=161 op=LOAD Dec 16 03:16:52.656000 audit: BPF prog-id=161 op=LOAD Dec 16 03:16:52.656000 audit: BPF prog-id=162 op=LOAD Dec 16 03:16:52.663233 kernel: audit: type=1334 audit(1765855012.656:557): prog-id=162 op=LOAD Dec 16 03:16:52.672979 kernel: audit: type=1300 audit(1765855012.656:557): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3247 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:52.656000 audit[3415]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3247 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:52.680649 kernel: audit: type=1327 audit(1765855012.656:557): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613632633434643236646261643964353331336162636363616431 Dec 16 03:16:52.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613632633434643236646261643964353331336162636363616431 Dec 16 03:16:52.682566 kernel: audit: type=1334 audit(1765855012.656:558): prog-id=162 op=UNLOAD Dec 16 03:16:52.656000 audit: BPF prog-id=162 op=UNLOAD Dec 16 03:16:52.687789 kernel: audit: type=1300 audit(1765855012.656:558): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=3247 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:52.656000 audit[3415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3247 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:52.694884 kernel: audit: type=1327 audit(1765855012.656:558): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613632633434643236646261643964353331336162636363616431 Dec 16 03:16:52.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613632633434643236646261643964353331336162636363616431 Dec 16 03:16:52.696729 kernel: audit: type=1334 audit(1765855012.656:559): prog-id=163 op=LOAD Dec 16 03:16:52.656000 audit: BPF prog-id=163 op=LOAD Dec 16 03:16:52.656000 audit[3415]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3247 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:52.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613632633434643236646261643964353331336162636363616431 Dec 16 03:16:52.704583 kernel: audit: type=1300 
audit(1765855012.656:559): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3247 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:52.704622 kernel: audit: type=1327 audit(1765855012.656:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613632633434643236646261643964353331336162636363616431 Dec 16 03:16:52.656000 audit: BPF prog-id=164 op=LOAD Dec 16 03:16:52.656000 audit[3415]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3247 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:52.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613632633434643236646261643964353331336162636363616431 Dec 16 03:16:52.656000 audit: BPF prog-id=164 op=UNLOAD Dec 16 03:16:52.656000 audit[3415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3247 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:52.656000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613632633434643236646261643964353331336162636363616431 Dec 16 03:16:52.656000 audit: BPF prog-id=163 op=UNLOAD Dec 16 03:16:52.656000 audit[3415]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3247 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:52.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613632633434643236646261643964353331336162636363616431 Dec 16 03:16:52.656000 audit: BPF prog-id=165 op=LOAD Dec 16 03:16:52.656000 audit[3415]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3247 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:52.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613632633434643236646261643964353331336162636363616431 Dec 16 03:16:52.715453 containerd[1630]: time="2025-12-16T03:16:52.715263243Z" level=info msg="StartContainer for \"2ea62c44d26dbad9d5313abcccad1f3145949c84d17a834085b94d90ec4e44f8\" returns successfully" Dec 16 03:16:52.893897 kubelet[2816]: I1216 03:16:52.893795 2816 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-848b49bb7d-4dmzc" podStartSLOduration=0.880178351 podStartE2EDuration="3.893779112s" podCreationTimestamp="2025-12-16 03:16:49 +0000 UTC" firstStartedPulling="2025-12-16 03:16:49.529062482 +0000 UTC m=+22.897953190" lastFinishedPulling="2025-12-16 03:16:52.542663244 +0000 UTC m=+25.911553951" observedRunningTime="2025-12-16 03:16:52.893567594 +0000 UTC m=+26.262458303" watchObservedRunningTime="2025-12-16 03:16:52.893779112 +0000 UTC m=+26.262669830" Dec 16 03:16:52.901927 kubelet[2816]: E1216 03:16:52.901832 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:52.901927 kubelet[2816]: W1216 03:16:52.901850 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:52.901927 kubelet[2816]: E1216 03:16:52.901866 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:52.902256 kubelet[2816]: E1216 03:16:52.902240 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:52.902256 kubelet[2816]: W1216 03:16:52.902254 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:52.902320 kubelet[2816]: E1216 03:16:52.902267 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:52.937772 kubelet[2816]: E1216 03:16:52.937743 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:52.937808 kubelet[2816]: W1216 03:16:52.937767 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:52.937879 kubelet[2816]: E1216 03:16:52.937859 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:52.937908 kubelet[2816]: E1216 03:16:52.937899 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:52.937908 kubelet[2816]: W1216 03:16:52.937904 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:52.937944 kubelet[2816]: E1216 03:16:52.937913 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:52.938037 kubelet[2816]: E1216 03:16:52.938025 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:52.938066 kubelet[2816]: W1216 03:16:52.938037 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:52.938066 kubelet[2816]: E1216 03:16:52.938044 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:52.938596 kubelet[2816]: E1216 03:16:52.938582 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:52.938596 kubelet[2816]: W1216 03:16:52.938594 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:52.938649 kubelet[2816]: E1216 03:16:52.938602 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:52.938865 kubelet[2816]: E1216 03:16:52.938852 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:52.938865 kubelet[2816]: W1216 03:16:52.938864 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:52.938913 kubelet[2816]: E1216 03:16:52.938871 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.735527 kubelet[2816]: E1216 03:16:53.735044 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:16:53.844579 kubelet[2816]: I1216 03:16:53.844533 2816 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 03:16:53.911552 kubelet[2816]: E1216 03:16:53.911513 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.911552 kubelet[2816]: W1216 03:16:53.911537 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.911552 kubelet[2816]: E1216 03:16:53.911557 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:53.912485 kubelet[2816]: E1216 03:16:53.911689 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.912485 kubelet[2816]: W1216 03:16:53.911698 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.912485 kubelet[2816]: E1216 03:16:53.911706 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.912485 kubelet[2816]: E1216 03:16:53.911928 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.912485 kubelet[2816]: W1216 03:16:53.911937 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.912485 kubelet[2816]: E1216 03:16:53.911947 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:53.912485 kubelet[2816]: E1216 03:16:53.912179 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.912485 kubelet[2816]: W1216 03:16:53.912189 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.912485 kubelet[2816]: E1216 03:16:53.912198 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.912998 kubelet[2816]: E1216 03:16:53.912554 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.912998 kubelet[2816]: W1216 03:16:53.912564 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.912998 kubelet[2816]: E1216 03:16:53.912573 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:53.913296 kubelet[2816]: E1216 03:16:53.913278 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.913296 kubelet[2816]: W1216 03:16:53.913294 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.913296 kubelet[2816]: E1216 03:16:53.913305 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.913516 kubelet[2816]: E1216 03:16:53.913440 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.913516 kubelet[2816]: W1216 03:16:53.913450 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.913516 kubelet[2816]: E1216 03:16:53.913457 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:53.913632 kubelet[2816]: E1216 03:16:53.913586 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.913632 kubelet[2816]: W1216 03:16:53.913594 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.913632 kubelet[2816]: E1216 03:16:53.913601 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.914479 kubelet[2816]: E1216 03:16:53.914423 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.914479 kubelet[2816]: W1216 03:16:53.914437 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.914479 kubelet[2816]: E1216 03:16:53.914447 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:53.914698 kubelet[2816]: E1216 03:16:53.914572 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.914698 kubelet[2816]: W1216 03:16:53.914579 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.914698 kubelet[2816]: E1216 03:16:53.914587 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.914844 kubelet[2816]: E1216 03:16:53.914779 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.914844 kubelet[2816]: W1216 03:16:53.914788 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.914844 kubelet[2816]: E1216 03:16:53.914797 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:53.914918 kubelet[2816]: E1216 03:16:53.914907 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.914918 kubelet[2816]: W1216 03:16:53.914915 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.915008 kubelet[2816]: E1216 03:16:53.914923 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.915136 kubelet[2816]: E1216 03:16:53.915033 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.915136 kubelet[2816]: W1216 03:16:53.915040 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.915136 kubelet[2816]: E1216 03:16:53.915047 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:53.915314 kubelet[2816]: E1216 03:16:53.915163 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.915314 kubelet[2816]: W1216 03:16:53.915170 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.915314 kubelet[2816]: E1216 03:16:53.915177 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.915314 kubelet[2816]: E1216 03:16:53.915294 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.915314 kubelet[2816]: W1216 03:16:53.915302 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.915314 kubelet[2816]: E1216 03:16:53.915309 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:53.943889 kubelet[2816]: E1216 03:16:53.943845 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.943889 kubelet[2816]: W1216 03:16:53.943869 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.943889 kubelet[2816]: E1216 03:16:53.943889 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.944335 kubelet[2816]: E1216 03:16:53.944098 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.944335 kubelet[2816]: W1216 03:16:53.944110 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.944335 kubelet[2816]: E1216 03:16:53.944136 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:53.944335 kubelet[2816]: E1216 03:16:53.944308 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.944335 kubelet[2816]: W1216 03:16:53.944316 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.944335 kubelet[2816]: E1216 03:16:53.944326 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.944486 kubelet[2816]: E1216 03:16:53.944463 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.944486 kubelet[2816]: W1216 03:16:53.944471 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.944486 kubelet[2816]: E1216 03:16:53.944478 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:53.944675 kubelet[2816]: E1216 03:16:53.944594 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.944675 kubelet[2816]: W1216 03:16:53.944607 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.944675 kubelet[2816]: E1216 03:16:53.944615 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.944929 kubelet[2816]: E1216 03:16:53.944741 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.944929 kubelet[2816]: W1216 03:16:53.944749 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.944929 kubelet[2816]: E1216 03:16:53.944787 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:53.945345 kubelet[2816]: E1216 03:16:53.944949 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.945345 kubelet[2816]: W1216 03:16:53.944958 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.945345 kubelet[2816]: E1216 03:16:53.944974 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.945619 kubelet[2816]: E1216 03:16:53.945587 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.945709 kubelet[2816]: W1216 03:16:53.945690 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.945900 kubelet[2816]: E1216 03:16:53.945833 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:53.946023 kubelet[2816]: E1216 03:16:53.945994 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.946023 kubelet[2816]: W1216 03:16:53.946016 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.946117 kubelet[2816]: E1216 03:16:53.946031 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.946226 kubelet[2816]: E1216 03:16:53.946153 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.946226 kubelet[2816]: W1216 03:16:53.946161 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.946226 kubelet[2816]: E1216 03:16:53.946190 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:53.946361 kubelet[2816]: E1216 03:16:53.946278 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.946361 kubelet[2816]: W1216 03:16:53.946286 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.946361 kubelet[2816]: E1216 03:16:53.946317 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.946521 kubelet[2816]: E1216 03:16:53.946406 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.946521 kubelet[2816]: W1216 03:16:53.946414 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.946521 kubelet[2816]: E1216 03:16:53.946429 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:53.946701 kubelet[2816]: E1216 03:16:53.946547 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.946701 kubelet[2816]: W1216 03:16:53.946555 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.946701 kubelet[2816]: E1216 03:16:53.946570 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.946898 kubelet[2816]: E1216 03:16:53.946866 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.946898 kubelet[2816]: W1216 03:16:53.946882 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.946898 kubelet[2816]: E1216 03:16:53.946900 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:53.947132 kubelet[2816]: E1216 03:16:53.947105 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.947132 kubelet[2816]: W1216 03:16:53.947125 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.947196 kubelet[2816]: E1216 03:16:53.947138 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.947529 kubelet[2816]: E1216 03:16:53.947492 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.947529 kubelet[2816]: W1216 03:16:53.947525 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.947617 kubelet[2816]: E1216 03:16:53.947540 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:53.947791 kubelet[2816]: E1216 03:16:53.947739 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.947828 kubelet[2816]: W1216 03:16:53.947803 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.947828 kubelet[2816]: E1216 03:16:53.947815 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:16:53.948418 kubelet[2816]: E1216 03:16:53.948393 2816 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:16:53.948418 kubelet[2816]: W1216 03:16:53.948411 2816 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:16:53.948483 kubelet[2816]: E1216 03:16:53.948428 2816 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:16:54.880956 containerd[1630]: time="2025-12-16T03:16:54.880899517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:54.882001 containerd[1630]: time="2025-12-16T03:16:54.881963817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:54.884299 containerd[1630]: time="2025-12-16T03:16:54.883296818Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:54.885394 containerd[1630]: time="2025-12-16T03:16:54.885362524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:16:54.886108 containerd[1630]: time="2025-12-16T03:16:54.886072370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.343129156s" Dec 16 03:16:54.886213 containerd[1630]: time="2025-12-16T03:16:54.886190228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 03:16:54.888768 containerd[1630]: time="2025-12-16T03:16:54.888676414Z" level=info msg="CreateContainer within sandbox \"d2f317b5451178eb27a86287c491b45c96fab0464b4a7e54a399c8579d82e3ca\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 03:16:54.931039 containerd[1630]: time="2025-12-16T03:16:54.930988521Z" level=info msg="Container 6e5b393eac7dbfed4625e69c5160ed11fc9766af4aa69fbf0e938465ab23374f: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:16:54.936821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1732828409.mount: Deactivated successfully. Dec 16 03:16:54.967777 containerd[1630]: time="2025-12-16T03:16:54.967711301Z" level=info msg="CreateContainer within sandbox \"d2f317b5451178eb27a86287c491b45c96fab0464b4a7e54a399c8579d82e3ca\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6e5b393eac7dbfed4625e69c5160ed11fc9766af4aa69fbf0e938465ab23374f\"" Dec 16 03:16:54.968722 containerd[1630]: time="2025-12-16T03:16:54.968565140Z" level=info msg="StartContainer for \"6e5b393eac7dbfed4625e69c5160ed11fc9766af4aa69fbf0e938465ab23374f\"" Dec 16 03:16:54.970501 containerd[1630]: time="2025-12-16T03:16:54.970456821Z" level=info msg="connecting to shim 6e5b393eac7dbfed4625e69c5160ed11fc9766af4aa69fbf0e938465ab23374f" address="unix:///run/containerd/s/6f3f2853b7142c9d54bbfe73106434c19e7d29ddbe4df46ffb2aa974d38501fc" protocol=ttrpc version=3 Dec 16 03:16:54.988987 systemd[1]: Started cri-containerd-6e5b393eac7dbfed4625e69c5160ed11fc9766af4aa69fbf0e938465ab23374f.scope - libcontainer container 6e5b393eac7dbfed4625e69c5160ed11fc9766af4aa69fbf0e938465ab23374f. 
Dec 16 03:16:55.048000 audit: BPF prog-id=166 op=LOAD Dec 16 03:16:55.048000 audit[3522]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3365 pid=3522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:55.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665356233393365616337646266656434363235653639633531363065 Dec 16 03:16:55.048000 audit: BPF prog-id=167 op=LOAD Dec 16 03:16:55.048000 audit[3522]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3365 pid=3522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:55.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665356233393365616337646266656434363235653639633531363065 Dec 16 03:16:55.048000 audit: BPF prog-id=167 op=UNLOAD Dec 16 03:16:55.048000 audit[3522]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:55.048000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665356233393365616337646266656434363235653639633531363065 Dec 16 03:16:55.048000 audit: BPF prog-id=166 op=UNLOAD Dec 16 03:16:55.048000 audit[3522]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:55.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665356233393365616337646266656434363235653639633531363065 Dec 16 03:16:55.048000 audit: BPF prog-id=168 op=LOAD Dec 16 03:16:55.048000 audit[3522]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3365 pid=3522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:55.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665356233393365616337646266656434363235653639633531363065 Dec 16 03:16:55.088370 containerd[1630]: time="2025-12-16T03:16:55.088319816Z" level=info msg="StartContainer for \"6e5b393eac7dbfed4625e69c5160ed11fc9766af4aa69fbf0e938465ab23374f\" returns successfully" Dec 16 03:16:55.094252 systemd[1]: cri-containerd-6e5b393eac7dbfed4625e69c5160ed11fc9766af4aa69fbf0e938465ab23374f.scope: Deactivated successfully. 
Dec 16 03:16:55.097000 audit: BPF prog-id=168 op=UNLOAD Dec 16 03:16:55.122933 containerd[1630]: time="2025-12-16T03:16:55.122860532Z" level=info msg="received container exit event container_id:\"6e5b393eac7dbfed4625e69c5160ed11fc9766af4aa69fbf0e938465ab23374f\" id:\"6e5b393eac7dbfed4625e69c5160ed11fc9766af4aa69fbf0e938465ab23374f\" pid:3535 exited_at:{seconds:1765855015 nanos:96997384}" Dec 16 03:16:55.148407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e5b393eac7dbfed4625e69c5160ed11fc9766af4aa69fbf0e938465ab23374f-rootfs.mount: Deactivated successfully. Dec 16 03:16:55.734074 kubelet[2816]: E1216 03:16:55.733998 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:16:55.853570 containerd[1630]: time="2025-12-16T03:16:55.853477648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 03:16:57.735012 kubelet[2816]: E1216 03:16:57.734911 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:16:59.735663 kubelet[2816]: E1216 03:16:59.735578 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:17:00.347244 containerd[1630]: time="2025-12-16T03:17:00.347180304Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:00.348428 containerd[1630]: time="2025-12-16T03:17:00.348256132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 03:17:00.349146 containerd[1630]: time="2025-12-16T03:17:00.349109425Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:00.351045 containerd[1630]: time="2025-12-16T03:17:00.351010159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:00.351583 containerd[1630]: time="2025-12-16T03:17:00.351556134Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.498014403s" Dec 16 03:17:00.351709 containerd[1630]: time="2025-12-16T03:17:00.351638423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 03:17:00.354328 containerd[1630]: time="2025-12-16T03:17:00.354300562Z" level=info msg="CreateContainer within sandbox \"d2f317b5451178eb27a86287c491b45c96fab0464b4a7e54a399c8579d82e3ca\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 03:17:00.364795 containerd[1630]: time="2025-12-16T03:17:00.362494905Z" level=info msg="Container 229b77a7a076ba1037a931d066c5db0072ffe8eb8cb5d6f18b3d3c71a6b7a6e9: CDI devices from CRI Config.CDIDevices: []" Dec 16 
03:17:00.387769 containerd[1630]: time="2025-12-16T03:17:00.387703252Z" level=info msg="CreateContainer within sandbox \"d2f317b5451178eb27a86287c491b45c96fab0464b4a7e54a399c8579d82e3ca\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"229b77a7a076ba1037a931d066c5db0072ffe8eb8cb5d6f18b3d3c71a6b7a6e9\"" Dec 16 03:17:00.388606 containerd[1630]: time="2025-12-16T03:17:00.388586046Z" level=info msg="StartContainer for \"229b77a7a076ba1037a931d066c5db0072ffe8eb8cb5d6f18b3d3c71a6b7a6e9\"" Dec 16 03:17:00.389927 containerd[1630]: time="2025-12-16T03:17:00.389876372Z" level=info msg="connecting to shim 229b77a7a076ba1037a931d066c5db0072ffe8eb8cb5d6f18b3d3c71a6b7a6e9" address="unix:///run/containerd/s/6f3f2853b7142c9d54bbfe73106434c19e7d29ddbe4df46ffb2aa974d38501fc" protocol=ttrpc version=3 Dec 16 03:17:00.413935 systemd[1]: Started cri-containerd-229b77a7a076ba1037a931d066c5db0072ffe8eb8cb5d6f18b3d3c71a6b7a6e9.scope - libcontainer container 229b77a7a076ba1037a931d066c5db0072ffe8eb8cb5d6f18b3d3c71a6b7a6e9. 
Dec 16 03:17:00.469857 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 16 03:17:00.470067 kernel: audit: type=1334 audit(1765855020.467:570): prog-id=169 op=LOAD Dec 16 03:17:00.467000 audit: BPF prog-id=169 op=LOAD Dec 16 03:17:00.479279 kernel: audit: type=1300 audit(1765855020.467:570): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3365 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:00.467000 audit[3580]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3365 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:00.481811 kernel: audit: type=1327 audit(1765855020.467:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232396237376137613037366261313033376139333164303636633564 Dec 16 03:17:00.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232396237376137613037366261313033376139333164303636633564 Dec 16 03:17:00.467000 audit: BPF prog-id=170 op=LOAD Dec 16 03:17:00.497853 kernel: audit: type=1334 audit(1765855020.467:571): prog-id=170 op=LOAD Dec 16 03:17:00.497932 kernel: audit: type=1300 audit(1765855020.467:571): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3365 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:00.467000 audit[3580]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3365 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:00.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232396237376137613037366261313033376139333164303636633564 Dec 16 03:17:00.507569 kernel: audit: type=1327 audit(1765855020.467:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232396237376137613037366261313033376139333164303636633564 Dec 16 03:17:00.507634 kernel: audit: type=1334 audit(1765855020.467:572): prog-id=170 op=UNLOAD Dec 16 03:17:00.514832 kernel: audit: type=1300 audit(1765855020.467:572): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:00.467000 audit: BPF prog-id=170 op=UNLOAD Dec 16 03:17:00.467000 audit[3580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:00.467000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232396237376137613037366261313033376139333164303636633564 Dec 16 03:17:00.525361 kernel: audit: type=1327 audit(1765855020.467:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232396237376137613037366261313033376139333164303636633564 Dec 16 03:17:00.525422 kernel: audit: type=1334 audit(1765855020.467:573): prog-id=169 op=UNLOAD Dec 16 03:17:00.467000 audit: BPF prog-id=169 op=UNLOAD Dec 16 03:17:00.467000 audit[3580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:00.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232396237376137613037366261313033376139333164303636633564 Dec 16 03:17:00.467000 audit: BPF prog-id=171 op=LOAD Dec 16 03:17:00.467000 audit[3580]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3365 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:00.467000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232396237376137613037366261313033376139333164303636633564 Dec 16 03:17:00.543955 containerd[1630]: time="2025-12-16T03:17:00.543887884Z" level=info msg="StartContainer for \"229b77a7a076ba1037a931d066c5db0072ffe8eb8cb5d6f18b3d3c71a6b7a6e9\" returns successfully" Dec 16 03:17:00.966273 systemd[1]: cri-containerd-229b77a7a076ba1037a931d066c5db0072ffe8eb8cb5d6f18b3d3c71a6b7a6e9.scope: Deactivated successfully. Dec 16 03:17:00.967000 audit: BPF prog-id=171 op=UNLOAD Dec 16 03:17:00.967264 systemd[1]: cri-containerd-229b77a7a076ba1037a931d066c5db0072ffe8eb8cb5d6f18b3d3c71a6b7a6e9.scope: Consumed 409ms CPU time, 161M memory peak, 5.9M read from disk, 171.3M written to disk. Dec 16 03:17:00.973110 containerd[1630]: time="2025-12-16T03:17:00.973041766Z" level=info msg="received container exit event container_id:\"229b77a7a076ba1037a931d066c5db0072ffe8eb8cb5d6f18b3d3c71a6b7a6e9\" id:\"229b77a7a076ba1037a931d066c5db0072ffe8eb8cb5d6f18b3d3c71a6b7a6e9\" pid:3593 exited_at:{seconds:1765855020 nanos:967724222}" Dec 16 03:17:01.061901 kubelet[2816]: I1216 03:17:01.061612 2816 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 03:17:01.073761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-229b77a7a076ba1037a931d066c5db0072ffe8eb8cb5d6f18b3d3c71a6b7a6e9-rootfs.mount: Deactivated successfully. Dec 16 03:17:01.138389 systemd[1]: Created slice kubepods-besteffort-podf1de3291_e9ad_40ba_a5da_243422ad4502.slice - libcontainer container kubepods-besteffort-podf1de3291_e9ad_40ba_a5da_243422ad4502.slice. Dec 16 03:17:01.150633 systemd[1]: Created slice kubepods-besteffort-pod83f66e0e_6c09_4937_932e_1ce867d20286.slice - libcontainer container kubepods-besteffort-pod83f66e0e_6c09_4937_932e_1ce867d20286.slice. 
Dec 16 03:17:01.158805 systemd[1]: Created slice kubepods-besteffort-pod0981f349_361d_45e9_bda1_a29e4e4386d6.slice - libcontainer container kubepods-besteffort-pod0981f349_361d_45e9_bda1_a29e4e4386d6.slice. Dec 16 03:17:01.167623 systemd[1]: Created slice kubepods-besteffort-podf3b4d493_b815_435b_8539_393930301f5a.slice - libcontainer container kubepods-besteffort-podf3b4d493_b815_435b_8539_393930301f5a.slice. Dec 16 03:17:01.174265 systemd[1]: Created slice kubepods-burstable-podc4b190c6_e978_4cd1_9872_1d44369dd5d3.slice - libcontainer container kubepods-burstable-podc4b190c6_e978_4cd1_9872_1d44369dd5d3.slice. Dec 16 03:17:01.180921 systemd[1]: Created slice kubepods-burstable-pod71ed773f_2e6f_481b_a8da_215c519a3532.slice - libcontainer container kubepods-burstable-pod71ed773f_2e6f_481b_a8da_215c519a3532.slice. Dec 16 03:17:01.186464 systemd[1]: Created slice kubepods-besteffort-pod2deef273_b182_480c_9527_049c0bd96660.slice - libcontainer container kubepods-besteffort-pod2deef273_b182_480c_9527_049c0bd96660.slice. 
Dec 16 03:17:01.195934 kubelet[2816]: I1216 03:17:01.195905 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83f66e0e-6c09-4937-932e-1ce867d20286-config\") pod \"goldmane-666569f655-z7p8t\" (UID: \"83f66e0e-6c09-4937-932e-1ce867d20286\") " pod="calico-system/goldmane-666569f655-z7p8t" Dec 16 03:17:01.196177 kubelet[2816]: I1216 03:17:01.196113 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4b190c6-e978-4cd1-9872-1d44369dd5d3-config-volume\") pod \"coredns-668d6bf9bc-lgq5m\" (UID: \"c4b190c6-e978-4cd1-9872-1d44369dd5d3\") " pod="kube-system/coredns-668d6bf9bc-lgq5m" Dec 16 03:17:01.196177 kubelet[2816]: I1216 03:17:01.196143 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8md5d\" (UniqueName: \"kubernetes.io/projected/71ed773f-2e6f-481b-a8da-215c519a3532-kube-api-access-8md5d\") pod \"coredns-668d6bf9bc-j4ql6\" (UID: \"71ed773f-2e6f-481b-a8da-215c519a3532\") " pod="kube-system/coredns-668d6bf9bc-j4ql6" Dec 16 03:17:01.196846 kubelet[2816]: I1216 03:17:01.196270 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2vvf\" (UniqueName: \"kubernetes.io/projected/f3b4d493-b815-435b-8539-393930301f5a-kube-api-access-s2vvf\") pod \"calico-apiserver-9b7f5fc68-6vh4x\" (UID: \"f3b4d493-b815-435b-8539-393930301f5a\") " pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" Dec 16 03:17:01.196846 kubelet[2816]: I1216 03:17:01.196288 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/83f66e0e-6c09-4937-932e-1ce867d20286-goldmane-key-pair\") pod \"goldmane-666569f655-z7p8t\" (UID: \"83f66e0e-6c09-4937-932e-1ce867d20286\") " 
pod="calico-system/goldmane-666569f655-z7p8t" Dec 16 03:17:01.196846 kubelet[2816]: I1216 03:17:01.196306 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw6rc\" (UniqueName: \"kubernetes.io/projected/f1de3291-e9ad-40ba-a5da-243422ad4502-kube-api-access-lw6rc\") pod \"whisker-596d5b8df9-mf8ch\" (UID: \"f1de3291-e9ad-40ba-a5da-243422ad4502\") " pod="calico-system/whisker-596d5b8df9-mf8ch" Dec 16 03:17:01.196846 kubelet[2816]: I1216 03:17:01.196357 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2deef273-b182-480c-9527-049c0bd96660-tigera-ca-bundle\") pod \"calico-kube-controllers-dcc4656ff-vfbkw\" (UID: \"2deef273-b182-480c-9527-049c0bd96660\") " pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" Dec 16 03:17:01.196846 kubelet[2816]: I1216 03:17:01.196370 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2zrc\" (UniqueName: \"kubernetes.io/projected/c4b190c6-e978-4cd1-9872-1d44369dd5d3-kube-api-access-m2zrc\") pod \"coredns-668d6bf9bc-lgq5m\" (UID: \"c4b190c6-e978-4cd1-9872-1d44369dd5d3\") " pod="kube-system/coredns-668d6bf9bc-lgq5m" Dec 16 03:17:01.196968 kubelet[2816]: I1216 03:17:01.196389 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ed773f-2e6f-481b-a8da-215c519a3532-config-volume\") pod \"coredns-668d6bf9bc-j4ql6\" (UID: \"71ed773f-2e6f-481b-a8da-215c519a3532\") " pod="kube-system/coredns-668d6bf9bc-j4ql6" Dec 16 03:17:01.196968 kubelet[2816]: I1216 03:17:01.196460 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1de3291-e9ad-40ba-a5da-243422ad4502-whisker-backend-key-pair\") pod 
\"whisker-596d5b8df9-mf8ch\" (UID: \"f1de3291-e9ad-40ba-a5da-243422ad4502\") " pod="calico-system/whisker-596d5b8df9-mf8ch" Dec 16 03:17:01.196968 kubelet[2816]: I1216 03:17:01.196475 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1de3291-e9ad-40ba-a5da-243422ad4502-whisker-ca-bundle\") pod \"whisker-596d5b8df9-mf8ch\" (UID: \"f1de3291-e9ad-40ba-a5da-243422ad4502\") " pod="calico-system/whisker-596d5b8df9-mf8ch" Dec 16 03:17:01.196968 kubelet[2816]: I1216 03:17:01.196521 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f3b4d493-b815-435b-8539-393930301f5a-calico-apiserver-certs\") pod \"calico-apiserver-9b7f5fc68-6vh4x\" (UID: \"f3b4d493-b815-435b-8539-393930301f5a\") " pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" Dec 16 03:17:01.196968 kubelet[2816]: I1216 03:17:01.196535 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83f66e0e-6c09-4937-932e-1ce867d20286-goldmane-ca-bundle\") pod \"goldmane-666569f655-z7p8t\" (UID: \"83f66e0e-6c09-4937-932e-1ce867d20286\") " pod="calico-system/goldmane-666569f655-z7p8t" Dec 16 03:17:01.198151 kubelet[2816]: I1216 03:17:01.196548 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsgrg\" (UniqueName: \"kubernetes.io/projected/0981f349-361d-45e9-bda1-a29e4e4386d6-kube-api-access-hsgrg\") pod \"calico-apiserver-9b7f5fc68-z5vrj\" (UID: \"0981f349-361d-45e9-bda1-a29e4e4386d6\") " pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" Dec 16 03:17:01.198151 kubelet[2816]: I1216 03:17:01.196606 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhglz\" 
(UniqueName: \"kubernetes.io/projected/83f66e0e-6c09-4937-932e-1ce867d20286-kube-api-access-mhglz\") pod \"goldmane-666569f655-z7p8t\" (UID: \"83f66e0e-6c09-4937-932e-1ce867d20286\") " pod="calico-system/goldmane-666569f655-z7p8t" Dec 16 03:17:01.198151 kubelet[2816]: I1216 03:17:01.196622 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0981f349-361d-45e9-bda1-a29e4e4386d6-calico-apiserver-certs\") pod \"calico-apiserver-9b7f5fc68-z5vrj\" (UID: \"0981f349-361d-45e9-bda1-a29e4e4386d6\") " pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" Dec 16 03:17:01.198151 kubelet[2816]: I1216 03:17:01.196636 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c989\" (UniqueName: \"kubernetes.io/projected/2deef273-b182-480c-9527-049c0bd96660-kube-api-access-5c989\") pod \"calico-kube-controllers-dcc4656ff-vfbkw\" (UID: \"2deef273-b182-480c-9527-049c0bd96660\") " pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" Dec 16 03:17:01.449240 containerd[1630]: time="2025-12-16T03:17:01.449185972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-596d5b8df9-mf8ch,Uid:f1de3291-e9ad-40ba-a5da-243422ad4502,Namespace:calico-system,Attempt:0,}" Dec 16 03:17:01.456836 containerd[1630]: time="2025-12-16T03:17:01.456800509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z7p8t,Uid:83f66e0e-6c09-4937-932e-1ce867d20286,Namespace:calico-system,Attempt:0,}" Dec 16 03:17:01.463873 containerd[1630]: time="2025-12-16T03:17:01.463829463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b7f5fc68-z5vrj,Uid:0981f349-361d-45e9-bda1-a29e4e4386d6,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:17:01.475227 containerd[1630]: time="2025-12-16T03:17:01.474240023Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-9b7f5fc68-6vh4x,Uid:f3b4d493-b815-435b-8539-393930301f5a,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:17:01.487932 containerd[1630]: time="2025-12-16T03:17:01.487902718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lgq5m,Uid:c4b190c6-e978-4cd1-9872-1d44369dd5d3,Namespace:kube-system,Attempt:0,}" Dec 16 03:17:01.488490 containerd[1630]: time="2025-12-16T03:17:01.488470523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j4ql6,Uid:71ed773f-2e6f-481b-a8da-215c519a3532,Namespace:kube-system,Attempt:0,}" Dec 16 03:17:01.491299 containerd[1630]: time="2025-12-16T03:17:01.491228000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dcc4656ff-vfbkw,Uid:2deef273-b182-480c-9527-049c0bd96660,Namespace:calico-system,Attempt:0,}" Dec 16 03:17:01.743973 containerd[1630]: time="2025-12-16T03:17:01.743433361Z" level=error msg="Failed to destroy network for sandbox \"3351d22198c4b11dc66fdd69da336bce9130e82d4599c0c580f6561034035cbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.768394 containerd[1630]: time="2025-12-16T03:17:01.768103430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-596d5b8df9-mf8ch,Uid:f1de3291-e9ad-40ba-a5da-243422ad4502,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3351d22198c4b11dc66fdd69da336bce9130e82d4599c0c580f6561034035cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.778640 containerd[1630]: time="2025-12-16T03:17:01.777157761Z" level=error msg="Failed to destroy network for sandbox 
\"8f483bad09d1c91c324e7878689efb505f1b7ef11a8076dca9cbc1428b11bc6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.781233 systemd[1]: Created slice kubepods-besteffort-pod8f212018_4b88_48fc_94d2_420427ed0241.slice - libcontainer container kubepods-besteffort-pod8f212018_4b88_48fc_94d2_420427ed0241.slice. Dec 16 03:17:01.783318 containerd[1630]: time="2025-12-16T03:17:01.783234906Z" level=error msg="Failed to destroy network for sandbox \"df8d7aa0ce2642b8439c303133ec8f7fb52b4c008a8e62a507cee0dd398a18a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.784651 kubelet[2816]: E1216 03:17:01.784548 2816 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3351d22198c4b11dc66fdd69da336bce9130e82d4599c0c580f6561034035cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.786906 kubelet[2816]: E1216 03:17:01.786823 2816 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3351d22198c4b11dc66fdd69da336bce9130e82d4599c0c580f6561034035cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-596d5b8df9-mf8ch" Dec 16 03:17:01.786906 kubelet[2816]: E1216 03:17:01.786853 2816 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3351d22198c4b11dc66fdd69da336bce9130e82d4599c0c580f6561034035cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-596d5b8df9-mf8ch" Dec 16 03:17:01.788409 containerd[1630]: time="2025-12-16T03:17:01.788384914Z" level=error msg="Failed to destroy network for sandbox \"465a1280a52bbd077805f80afe6880ce04a7e94c44a4b4209151b8b3b6abb465\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.789052 kubelet[2816]: E1216 03:17:01.788915 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-596d5b8df9-mf8ch_calico-system(f1de3291-e9ad-40ba-a5da-243422ad4502)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-596d5b8df9-mf8ch_calico-system(f1de3291-e9ad-40ba-a5da-243422ad4502)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3351d22198c4b11dc66fdd69da336bce9130e82d4599c0c580f6561034035cbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-596d5b8df9-mf8ch" podUID="f1de3291-e9ad-40ba-a5da-243422ad4502" Dec 16 03:17:01.793948 containerd[1630]: time="2025-12-16T03:17:01.793895175Z" level=error msg="Failed to destroy network for sandbox \"90e1f9ab22f0d529398b118e5083bd0643fd74cc9c084d8ddda2fd1abca20d34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.795391 containerd[1630]: time="2025-12-16T03:17:01.794899300Z" level=error msg="Failed to destroy network 
for sandbox \"629e67557605f3f670fd8984b9ae0d4cf1b218ea1e88ccd55bad8bccd353d4a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.799596 containerd[1630]: time="2025-12-16T03:17:01.799544841Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dcc4656ff-vfbkw,Uid:2deef273-b182-480c-9527-049c0bd96660,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"629e67557605f3f670fd8984b9ae0d4cf1b218ea1e88ccd55bad8bccd353d4a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.801969 containerd[1630]: time="2025-12-16T03:17:01.801798822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lgq5m,Uid:c4b190c6-e978-4cd1-9872-1d44369dd5d3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f483bad09d1c91c324e7878689efb505f1b7ef11a8076dca9cbc1428b11bc6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.802826 kubelet[2816]: E1216 03:17:01.802184 2816 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f483bad09d1c91c324e7878689efb505f1b7ef11a8076dca9cbc1428b11bc6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.802826 kubelet[2816]: E1216 03:17:01.802323 2816 kuberuntime_sandbox.go:72] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f483bad09d1c91c324e7878689efb505f1b7ef11a8076dca9cbc1428b11bc6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lgq5m" Dec 16 03:17:01.802826 kubelet[2816]: E1216 03:17:01.802346 2816 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f483bad09d1c91c324e7878689efb505f1b7ef11a8076dca9cbc1428b11bc6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lgq5m" Dec 16 03:17:01.802990 kubelet[2816]: E1216 03:17:01.802381 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lgq5m_kube-system(c4b190c6-e978-4cd1-9872-1d44369dd5d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lgq5m_kube-system(c4b190c6-e978-4cd1-9872-1d44369dd5d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f483bad09d1c91c324e7878689efb505f1b7ef11a8076dca9cbc1428b11bc6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lgq5m" podUID="c4b190c6-e978-4cd1-9872-1d44369dd5d3" Dec 16 03:17:01.802990 kubelet[2816]: E1216 03:17:01.802817 2816 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"629e67557605f3f670fd8984b9ae0d4cf1b218ea1e88ccd55bad8bccd353d4a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.802990 kubelet[2816]: E1216 03:17:01.802881 2816 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"629e67557605f3f670fd8984b9ae0d4cf1b218ea1e88ccd55bad8bccd353d4a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" Dec 16 03:17:01.803088 kubelet[2816]: E1216 03:17:01.802901 2816 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"629e67557605f3f670fd8984b9ae0d4cf1b218ea1e88ccd55bad8bccd353d4a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" Dec 16 03:17:01.803088 kubelet[2816]: E1216 03:17:01.802928 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dcc4656ff-vfbkw_calico-system(2deef273-b182-480c-9527-049c0bd96660)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dcc4656ff-vfbkw_calico-system(2deef273-b182-480c-9527-049c0bd96660)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"629e67557605f3f670fd8984b9ae0d4cf1b218ea1e88ccd55bad8bccd353d4a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:17:01.806339 
containerd[1630]: time="2025-12-16T03:17:01.806284145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b7f5fc68-z5vrj,Uid:0981f349-361d-45e9-bda1-a29e4e4386d6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df8d7aa0ce2642b8439c303133ec8f7fb52b4c008a8e62a507cee0dd398a18a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.808354 containerd[1630]: time="2025-12-16T03:17:01.807996021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vbf4g,Uid:8f212018-4b88-48fc-94d2-420427ed0241,Namespace:calico-system,Attempt:0,}" Dec 16 03:17:01.808414 kubelet[2816]: E1216 03:17:01.808084 2816 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df8d7aa0ce2642b8439c303133ec8f7fb52b4c008a8e62a507cee0dd398a18a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.808414 kubelet[2816]: E1216 03:17:01.808128 2816 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df8d7aa0ce2642b8439c303133ec8f7fb52b4c008a8e62a507cee0dd398a18a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" Dec 16 03:17:01.808414 kubelet[2816]: E1216 03:17:01.808146 2816 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"df8d7aa0ce2642b8439c303133ec8f7fb52b4c008a8e62a507cee0dd398a18a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" Dec 16 03:17:01.808487 kubelet[2816]: E1216 03:17:01.808173 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9b7f5fc68-z5vrj_calico-apiserver(0981f349-361d-45e9-bda1-a29e4e4386d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9b7f5fc68-z5vrj_calico-apiserver(0981f349-361d-45e9-bda1-a29e4e4386d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df8d7aa0ce2642b8439c303133ec8f7fb52b4c008a8e62a507cee0dd398a18a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6" Dec 16 03:17:01.808952 containerd[1630]: time="2025-12-16T03:17:01.808730967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z7p8t,Uid:83f66e0e-6c09-4937-932e-1ce867d20286,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"465a1280a52bbd077805f80afe6880ce04a7e94c44a4b4209151b8b3b6abb465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.809128 kubelet[2816]: E1216 03:17:01.808864 2816 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"465a1280a52bbd077805f80afe6880ce04a7e94c44a4b4209151b8b3b6abb465\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.809128 kubelet[2816]: E1216 03:17:01.808887 2816 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"465a1280a52bbd077805f80afe6880ce04a7e94c44a4b4209151b8b3b6abb465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-z7p8t" Dec 16 03:17:01.809128 kubelet[2816]: E1216 03:17:01.808899 2816 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"465a1280a52bbd077805f80afe6880ce04a7e94c44a4b4209151b8b3b6abb465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-z7p8t" Dec 16 03:17:01.809200 kubelet[2816]: E1216 03:17:01.808924 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-z7p8t_calico-system(83f66e0e-6c09-4937-932e-1ce867d20286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-z7p8t_calico-system(83f66e0e-6c09-4937-932e-1ce867d20286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"465a1280a52bbd077805f80afe6880ce04a7e94c44a4b4209151b8b3b6abb465\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-z7p8t" podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:17:01.809779 
containerd[1630]: time="2025-12-16T03:17:01.809589647Z" level=error msg="Failed to destroy network for sandbox \"d9e08d06bfbb83ab0d209e7cbcf0913fbe7a0a067daffb6831aa9770c8a55938\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.810446 containerd[1630]: time="2025-12-16T03:17:01.810417833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j4ql6,Uid:71ed773f-2e6f-481b-a8da-215c519a3532,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90e1f9ab22f0d529398b118e5083bd0643fd74cc9c084d8ddda2fd1abca20d34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.812674 kubelet[2816]: E1216 03:17:01.812608 2816 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90e1f9ab22f0d529398b118e5083bd0643fd74cc9c084d8ddda2fd1abca20d34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.812674 kubelet[2816]: E1216 03:17:01.812640 2816 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90e1f9ab22f0d529398b118e5083bd0643fd74cc9c084d8ddda2fd1abca20d34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j4ql6" Dec 16 03:17:01.812674 kubelet[2816]: E1216 03:17:01.812652 2816 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90e1f9ab22f0d529398b118e5083bd0643fd74cc9c084d8ddda2fd1abca20d34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j4ql6" Dec 16 03:17:01.812791 kubelet[2816]: E1216 03:17:01.812686 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j4ql6_kube-system(71ed773f-2e6f-481b-a8da-215c519a3532)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j4ql6_kube-system(71ed773f-2e6f-481b-a8da-215c519a3532)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90e1f9ab22f0d529398b118e5083bd0643fd74cc9c084d8ddda2fd1abca20d34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j4ql6" podUID="71ed773f-2e6f-481b-a8da-215c519a3532" Dec 16 03:17:01.813701 containerd[1630]: time="2025-12-16T03:17:01.813610244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b7f5fc68-6vh4x,Uid:f3b4d493-b815-435b-8539-393930301f5a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9e08d06bfbb83ab0d209e7cbcf0913fbe7a0a067daffb6831aa9770c8a55938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.814387 kubelet[2816]: E1216 03:17:01.814043 2816 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d9e08d06bfbb83ab0d209e7cbcf0913fbe7a0a067daffb6831aa9770c8a55938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.814387 kubelet[2816]: E1216 03:17:01.814071 2816 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9e08d06bfbb83ab0d209e7cbcf0913fbe7a0a067daffb6831aa9770c8a55938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" Dec 16 03:17:01.814387 kubelet[2816]: E1216 03:17:01.814082 2816 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9e08d06bfbb83ab0d209e7cbcf0913fbe7a0a067daffb6831aa9770c8a55938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" Dec 16 03:17:01.814469 kubelet[2816]: E1216 03:17:01.814104 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9b7f5fc68-6vh4x_calico-apiserver(f3b4d493-b815-435b-8539-393930301f5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9b7f5fc68-6vh4x_calico-apiserver(f3b4d493-b815-435b-8539-393930301f5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9e08d06bfbb83ab0d209e7cbcf0913fbe7a0a067daffb6831aa9770c8a55938\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:17:01.858795 containerd[1630]: time="2025-12-16T03:17:01.858673095Z" level=error msg="Failed to destroy network for sandbox \"ee2ea5fd40a60da01b01851c97743f9dfe7ad35c5512c6d0dc74b851331253b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.861183 containerd[1630]: time="2025-12-16T03:17:01.861089134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vbf4g,Uid:8f212018-4b88-48fc-94d2-420427ed0241,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee2ea5fd40a60da01b01851c97743f9dfe7ad35c5512c6d0dc74b851331253b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.861417 kubelet[2816]: E1216 03:17:01.861374 2816 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee2ea5fd40a60da01b01851c97743f9dfe7ad35c5512c6d0dc74b851331253b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:17:01.861494 kubelet[2816]: E1216 03:17:01.861435 2816 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee2ea5fd40a60da01b01851c97743f9dfe7ad35c5512c6d0dc74b851331253b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vbf4g" 
Dec 16 03:17:01.861494 kubelet[2816]: E1216 03:17:01.861453 2816 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee2ea5fd40a60da01b01851c97743f9dfe7ad35c5512c6d0dc74b851331253b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vbf4g" Dec 16 03:17:01.861538 kubelet[2816]: E1216 03:17:01.861506 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vbf4g_calico-system(8f212018-4b88-48fc-94d2-420427ed0241)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vbf4g_calico-system(8f212018-4b88-48fc-94d2-420427ed0241)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee2ea5fd40a60da01b01851c97743f9dfe7ad35c5512c6d0dc74b851331253b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:17:01.878968 containerd[1630]: time="2025-12-16T03:17:01.878938484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 03:17:02.364397 systemd[1]: run-netns-cni\x2dea7e3c4d\x2d8d1a\x2d95ad\x2d45a9\x2da73eb2dafb16.mount: Deactivated successfully. Dec 16 03:17:02.364527 systemd[1]: run-netns-cni\x2dafe10afb\x2dba66\x2d64c5\x2d3bb7\x2d15ae9c5aa10f.mount: Deactivated successfully. Dec 16 03:17:02.364629 systemd[1]: run-netns-cni\x2da4df941d\x2d45d9\x2d577c\x2d9202\x2d43b42462901b.mount: Deactivated successfully. Dec 16 03:17:02.364683 systemd[1]: run-netns-cni\x2d85f221e6\x2d2264\x2d0a69\x2dd2f1\x2daaf0d5c60dd0.mount: Deactivated successfully. 
Dec 16 03:17:09.553849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3637477108.mount: Deactivated successfully. Dec 16 03:17:09.606832 containerd[1630]: time="2025-12-16T03:17:09.606002003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 03:17:09.609291 containerd[1630]: time="2025-12-16T03:17:09.588146772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:09.618453 containerd[1630]: time="2025-12-16T03:17:09.618409704Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:09.619466 containerd[1630]: time="2025-12-16T03:17:09.619012287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:17:09.619466 containerd[1630]: time="2025-12-16T03:17:09.619367727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.740391465s" Dec 16 03:17:09.619466 containerd[1630]: time="2025-12-16T03:17:09.619390732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 03:17:09.653893 containerd[1630]: time="2025-12-16T03:17:09.653800872Z" level=info msg="CreateContainer within sandbox \"d2f317b5451178eb27a86287c491b45c96fab0464b4a7e54a399c8579d82e3ca\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 03:17:09.689685 containerd[1630]: time="2025-12-16T03:17:09.686984063Z" level=info msg="Container 06f2f84af3b7d20f0802d5ef26558194fc2a9fae4d08df251ac9c91dc0d1f542: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:17:09.688300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4102406625.mount: Deactivated successfully. Dec 16 03:17:09.738071 containerd[1630]: time="2025-12-16T03:17:09.737969089Z" level=info msg="CreateContainer within sandbox \"d2f317b5451178eb27a86287c491b45c96fab0464b4a7e54a399c8579d82e3ca\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"06f2f84af3b7d20f0802d5ef26558194fc2a9fae4d08df251ac9c91dc0d1f542\"" Dec 16 03:17:09.740035 containerd[1630]: time="2025-12-16T03:17:09.740002818Z" level=info msg="StartContainer for \"06f2f84af3b7d20f0802d5ef26558194fc2a9fae4d08df251ac9c91dc0d1f542\"" Dec 16 03:17:09.747597 containerd[1630]: time="2025-12-16T03:17:09.747539637Z" level=info msg="connecting to shim 06f2f84af3b7d20f0802d5ef26558194fc2a9fae4d08df251ac9c91dc0d1f542" address="unix:///run/containerd/s/6f3f2853b7142c9d54bbfe73106434c19e7d29ddbe4df46ffb2aa974d38501fc" protocol=ttrpc version=3 Dec 16 03:17:09.844936 systemd[1]: Started cri-containerd-06f2f84af3b7d20f0802d5ef26558194fc2a9fae4d08df251ac9c91dc0d1f542.scope - libcontainer container 06f2f84af3b7d20f0802d5ef26558194fc2a9fae4d08df251ac9c91dc0d1f542. 
Dec 16 03:17:09.892788 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 03:17:09.893485 kernel: audit: type=1334 audit(1765855029.889:576): prog-id=172 op=LOAD Dec 16 03:17:09.889000 audit: BPF prog-id=172 op=LOAD Dec 16 03:17:09.889000 audit[3862]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019e488 a2=98 a3=0 items=0 ppid=3365 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:09.904780 kernel: audit: type=1300 audit(1765855029.889:576): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019e488 a2=98 a3=0 items=0 ppid=3365 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:09.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036663266383461663362376432306630383032643565663236353538 Dec 16 03:17:09.915022 kernel: audit: type=1327 audit(1765855029.889:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036663266383461663362376432306630383032643565663236353538 Dec 16 03:17:09.915083 kernel: audit: type=1334 audit(1765855029.891:577): prog-id=173 op=LOAD Dec 16 03:17:09.891000 audit: BPF prog-id=173 op=LOAD Dec 16 03:17:09.920173 kernel: audit: type=1300 audit(1765855029.891:577): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00019e218 a2=98 a3=0 items=0 ppid=3365 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:09.891000 audit[3862]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00019e218 a2=98 a3=0 items=0 ppid=3365 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:09.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036663266383461663362376432306630383032643565663236353538 Dec 16 03:17:09.926528 kernel: audit: type=1327 audit(1765855029.891:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036663266383461663362376432306630383032643565663236353538 Dec 16 03:17:09.933246 kernel: audit: type=1334 audit(1765855029.891:578): prog-id=173 op=UNLOAD Dec 16 03:17:09.891000 audit: BPF prog-id=173 op=UNLOAD Dec 16 03:17:09.939809 kernel: audit: type=1300 audit(1765855029.891:578): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:09.891000 audit[3862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:09.891000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036663266383461663362376432306630383032643565663236353538 Dec 16 03:17:09.945303 containerd[1630]: time="2025-12-16T03:17:09.945267407Z" level=info msg="StartContainer for \"06f2f84af3b7d20f0802d5ef26558194fc2a9fae4d08df251ac9c91dc0d1f542\" returns successfully" Dec 16 03:17:09.891000 audit: BPF prog-id=172 op=UNLOAD Dec 16 03:17:09.952873 kernel: audit: type=1327 audit(1765855029.891:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036663266383461663362376432306630383032643565663236353538 Dec 16 03:17:09.952947 kernel: audit: type=1334 audit(1765855029.891:579): prog-id=172 op=UNLOAD Dec 16 03:17:09.891000 audit[3862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3365 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:09.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036663266383461663362376432306630383032643565663236353538 Dec 16 03:17:09.891000 audit: BPF prog-id=174 op=LOAD Dec 16 03:17:09.891000 audit[3862]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019e6e8 a2=98 a3=0 items=0 ppid=3365 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:09.891000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036663266383461663362376432306630383032643565663236353538 Dec 16 03:17:10.196536 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 03:17:10.196690 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 03:17:10.566923 kubelet[2816]: I1216 03:17:10.566897 2816 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw6rc\" (UniqueName: \"kubernetes.io/projected/f1de3291-e9ad-40ba-a5da-243422ad4502-kube-api-access-lw6rc\") pod \"f1de3291-e9ad-40ba-a5da-243422ad4502\" (UID: \"f1de3291-e9ad-40ba-a5da-243422ad4502\") " Dec 16 03:17:10.567748 kubelet[2816]: I1216 03:17:10.567276 2816 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1de3291-e9ad-40ba-a5da-243422ad4502-whisker-backend-key-pair\") pod \"f1de3291-e9ad-40ba-a5da-243422ad4502\" (UID: \"f1de3291-e9ad-40ba-a5da-243422ad4502\") " Dec 16 03:17:10.567748 kubelet[2816]: I1216 03:17:10.567302 2816 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1de3291-e9ad-40ba-a5da-243422ad4502-whisker-ca-bundle\") pod \"f1de3291-e9ad-40ba-a5da-243422ad4502\" (UID: \"f1de3291-e9ad-40ba-a5da-243422ad4502\") " Dec 16 03:17:10.567748 kubelet[2816]: I1216 03:17:10.567587 2816 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1de3291-e9ad-40ba-a5da-243422ad4502-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f1de3291-e9ad-40ba-a5da-243422ad4502" (UID: "f1de3291-e9ad-40ba-a5da-243422ad4502"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 03:17:10.573556 systemd[1]: var-lib-kubelet-pods-f1de3291\x2de9ad\x2d40ba\x2da5da\x2d243422ad4502-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 03:17:10.577445 kubelet[2816]: I1216 03:17:10.577326 2816 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1de3291-e9ad-40ba-a5da-243422ad4502-kube-api-access-lw6rc" (OuterVolumeSpecName: "kube-api-access-lw6rc") pod "f1de3291-e9ad-40ba-a5da-243422ad4502" (UID: "f1de3291-e9ad-40ba-a5da-243422ad4502"). InnerVolumeSpecName "kube-api-access-lw6rc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 03:17:10.577919 kubelet[2816]: I1216 03:17:10.577743 2816 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1de3291-e9ad-40ba-a5da-243422ad4502-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f1de3291-e9ad-40ba-a5da-243422ad4502" (UID: "f1de3291-e9ad-40ba-a5da-243422ad4502"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 03:17:10.578520 systemd[1]: var-lib-kubelet-pods-f1de3291\x2de9ad\x2d40ba\x2da5da\x2d243422ad4502-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlw6rc.mount: Deactivated successfully. 
Dec 16 03:17:10.668222 kubelet[2816]: I1216 03:17:10.668158 2816 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1de3291-e9ad-40ba-a5da-243422ad4502-whisker-backend-key-pair\") on node \"ci-4547-0-0-6-1137cb7bd3\" DevicePath \"\"" Dec 16 03:17:10.668222 kubelet[2816]: I1216 03:17:10.668188 2816 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1de3291-e9ad-40ba-a5da-243422ad4502-whisker-ca-bundle\") on node \"ci-4547-0-0-6-1137cb7bd3\" DevicePath \"\"" Dec 16 03:17:10.668523 kubelet[2816]: I1216 03:17:10.668305 2816 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lw6rc\" (UniqueName: \"kubernetes.io/projected/f1de3291-e9ad-40ba-a5da-243422ad4502-kube-api-access-lw6rc\") on node \"ci-4547-0-0-6-1137cb7bd3\" DevicePath \"\"" Dec 16 03:17:10.753641 systemd[1]: Removed slice kubepods-besteffort-podf1de3291_e9ad_40ba_a5da_243422ad4502.slice - libcontainer container kubepods-besteffort-podf1de3291_e9ad_40ba_a5da_243422ad4502.slice. 
Dec 16 03:17:10.969105 kubelet[2816]: I1216 03:17:10.969035 2816 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vrxzh" podStartSLOduration=2.018343971 podStartE2EDuration="21.969008699s" podCreationTimestamp="2025-12-16 03:16:49 +0000 UTC" firstStartedPulling="2025-12-16 03:16:49.669553726 +0000 UTC m=+23.038444434" lastFinishedPulling="2025-12-16 03:17:09.620218454 +0000 UTC m=+42.989109162" observedRunningTime="2025-12-16 03:17:10.952797168 +0000 UTC m=+44.321687906" watchObservedRunningTime="2025-12-16 03:17:10.969008699 +0000 UTC m=+44.337899476" Dec 16 03:17:11.071805 kubelet[2816]: I1216 03:17:11.070569 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cefbc96-7243-4733-b6b3-1ddb2b5191b3-whisker-ca-bundle\") pod \"whisker-c6d594499-pd2v8\" (UID: \"4cefbc96-7243-4733-b6b3-1ddb2b5191b3\") " pod="calico-system/whisker-c6d594499-pd2v8" Dec 16 03:17:11.071805 kubelet[2816]: I1216 03:17:11.071102 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4cefbc96-7243-4733-b6b3-1ddb2b5191b3-whisker-backend-key-pair\") pod \"whisker-c6d594499-pd2v8\" (UID: \"4cefbc96-7243-4733-b6b3-1ddb2b5191b3\") " pod="calico-system/whisker-c6d594499-pd2v8" Dec 16 03:17:11.071805 kubelet[2816]: I1216 03:17:11.071202 2816 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj9zd\" (UniqueName: \"kubernetes.io/projected/4cefbc96-7243-4733-b6b3-1ddb2b5191b3-kube-api-access-fj9zd\") pod \"whisker-c6d594499-pd2v8\" (UID: \"4cefbc96-7243-4733-b6b3-1ddb2b5191b3\") " pod="calico-system/whisker-c6d594499-pd2v8" Dec 16 03:17:11.074332 systemd[1]: Created slice kubepods-besteffort-pod4cefbc96_7243_4733_b6b3_1ddb2b5191b3.slice - libcontainer container 
kubepods-besteffort-pod4cefbc96_7243_4733_b6b3_1ddb2b5191b3.slice. Dec 16 03:17:11.381746 containerd[1630]: time="2025-12-16T03:17:11.381345076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c6d594499-pd2v8,Uid:4cefbc96-7243-4733-b6b3-1ddb2b5191b3,Namespace:calico-system,Attempt:0,}" Dec 16 03:17:11.740307 systemd-networkd[1541]: calid14356d6205: Link UP Dec 16 03:17:11.740679 systemd-networkd[1541]: calid14356d6205: Gained carrier Dec 16 03:17:11.762496 containerd[1630]: 2025-12-16 03:17:11.422 [INFO][3931] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:17:11.762496 containerd[1630]: 2025-12-16 03:17:11.476 [INFO][3931] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--6--1137cb7bd3-k8s-whisker--c6d594499--pd2v8-eth0 whisker-c6d594499- calico-system 4cefbc96-7243-4733-b6b3-1ddb2b5191b3 906 0 2025-12-16 03:17:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c6d594499 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547-0-0-6-1137cb7bd3 whisker-c6d594499-pd2v8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid14356d6205 [] [] }} ContainerID="99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" Namespace="calico-system" Pod="whisker-c6d594499-pd2v8" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-whisker--c6d594499--pd2v8-" Dec 16 03:17:11.762496 containerd[1630]: 2025-12-16 03:17:11.476 [INFO][3931] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" Namespace="calico-system" Pod="whisker-c6d594499-pd2v8" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-whisker--c6d594499--pd2v8-eth0" Dec 16 03:17:11.762496 containerd[1630]: 2025-12-16 03:17:11.672 [INFO][3939] ipam/ipam_plugin.go 227: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" HandleID="k8s-pod-network.99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-whisker--c6d594499--pd2v8-eth0" Dec 16 03:17:11.762701 containerd[1630]: 2025-12-16 03:17:11.673 [INFO][3939] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" HandleID="k8s-pod-network.99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-whisker--c6d594499--pd2v8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000335910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-6-1137cb7bd3", "pod":"whisker-c6d594499-pd2v8", "timestamp":"2025-12-16 03:17:11.672493794 +0000 UTC"}, Hostname:"ci-4547-0-0-6-1137cb7bd3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:17:11.762701 containerd[1630]: 2025-12-16 03:17:11.673 [INFO][3939] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:17:11.762701 containerd[1630]: 2025-12-16 03:17:11.675 [INFO][3939] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:17:11.762701 containerd[1630]: 2025-12-16 03:17:11.676 [INFO][3939] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-6-1137cb7bd3' Dec 16 03:17:11.762701 containerd[1630]: 2025-12-16 03:17:11.688 [INFO][3939] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:11.762701 containerd[1630]: 2025-12-16 03:17:11.697 [INFO][3939] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:11.762701 containerd[1630]: 2025-12-16 03:17:11.703 [INFO][3939] ipam/ipam.go 511: Trying affinity for 192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:11.762701 containerd[1630]: 2025-12-16 03:17:11.704 [INFO][3939] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:11.762701 containerd[1630]: 2025-12-16 03:17:11.706 [INFO][3939] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:11.764295 containerd[1630]: 2025-12-16 03:17:11.706 [INFO][3939] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:11.764295 containerd[1630]: 2025-12-16 03:17:11.708 [INFO][3939] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8 Dec 16 03:17:11.764295 containerd[1630]: 2025-12-16 03:17:11.712 [INFO][3939] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:11.764295 containerd[1630]: 2025-12-16 03:17:11.718 [INFO][3939] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.67.65/26] block=192.168.67.64/26 handle="k8s-pod-network.99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:11.764295 containerd[1630]: 2025-12-16 03:17:11.718 [INFO][3939] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.65/26] handle="k8s-pod-network.99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:11.764295 containerd[1630]: 2025-12-16 03:17:11.718 [INFO][3939] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:17:11.764295 containerd[1630]: 2025-12-16 03:17:11.719 [INFO][3939] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.67.65/26] IPv6=[] ContainerID="99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" HandleID="k8s-pod-network.99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-whisker--c6d594499--pd2v8-eth0" Dec 16 03:17:11.764409 containerd[1630]: 2025-12-16 03:17:11.723 [INFO][3931] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" Namespace="calico-system" Pod="whisker-c6d594499-pd2v8" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-whisker--c6d594499--pd2v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--6--1137cb7bd3-k8s-whisker--c6d594499--pd2v8-eth0", GenerateName:"whisker-c6d594499-", Namespace:"calico-system", SelfLink:"", UID:"4cefbc96-7243-4733-b6b3-1ddb2b5191b3", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c6d594499", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-6-1137cb7bd3", ContainerID:"", Pod:"whisker-c6d594499-pd2v8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.67.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid14356d6205", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:17:11.764409 containerd[1630]: 2025-12-16 03:17:11.724 [INFO][3931] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.65/32] ContainerID="99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" Namespace="calico-system" Pod="whisker-c6d594499-pd2v8" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-whisker--c6d594499--pd2v8-eth0" Dec 16 03:17:11.764660 containerd[1630]: 2025-12-16 03:17:11.724 [INFO][3931] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid14356d6205 ContainerID="99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" Namespace="calico-system" Pod="whisker-c6d594499-pd2v8" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-whisker--c6d594499--pd2v8-eth0" Dec 16 03:17:11.764660 containerd[1630]: 2025-12-16 03:17:11.741 [INFO][3931] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" Namespace="calico-system" Pod="whisker-c6d594499-pd2v8" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-whisker--c6d594499--pd2v8-eth0" Dec 16 03:17:11.764718 containerd[1630]: 2025-12-16 03:17:11.741 [INFO][3931] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" Namespace="calico-system" Pod="whisker-c6d594499-pd2v8" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-whisker--c6d594499--pd2v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--6--1137cb7bd3-k8s-whisker--c6d594499--pd2v8-eth0", GenerateName:"whisker-c6d594499-", Namespace:"calico-system", SelfLink:"", UID:"4cefbc96-7243-4733-b6b3-1ddb2b5191b3", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 17, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c6d594499", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-6-1137cb7bd3", ContainerID:"99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8", Pod:"whisker-c6d594499-pd2v8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.67.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid14356d6205", MAC:"b6:88:f8:7c:5f:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:17:11.764816 containerd[1630]: 2025-12-16 03:17:11.754 [INFO][3931] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" 
Namespace="calico-system" Pod="whisker-c6d594499-pd2v8" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-whisker--c6d594499--pd2v8-eth0" Dec 16 03:17:11.930943 kubelet[2816]: I1216 03:17:11.929341 2816 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 03:17:12.060343 containerd[1630]: time="2025-12-16T03:17:12.059811048Z" level=info msg="connecting to shim 99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8" address="unix:///run/containerd/s/aeb64e0b028402bb0b7f20c1bcdf31dc2428a8da03c4b5b9a6272cd6f2925bc1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:17:12.093905 systemd[1]: Started cri-containerd-99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8.scope - libcontainer container 99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8. Dec 16 03:17:12.106000 audit: BPF prog-id=175 op=LOAD Dec 16 03:17:12.106000 audit: BPF prog-id=176 op=LOAD Dec 16 03:17:12.106000 audit[4064]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:12.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643935303735326332343436636164323438343336633335666432 Dec 16 03:17:12.106000 audit: BPF prog-id=176 op=UNLOAD Dec 16 03:17:12.106000 audit[4064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:12.106000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643935303735326332343436636164323438343336633335666432 Dec 16 03:17:12.107000 audit: BPF prog-id=177 op=LOAD Dec 16 03:17:12.107000 audit[4064]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:12.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643935303735326332343436636164323438343336633335666432 Dec 16 03:17:12.107000 audit: BPF prog-id=178 op=LOAD Dec 16 03:17:12.107000 audit[4064]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:12.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643935303735326332343436636164323438343336633335666432 Dec 16 03:17:12.107000 audit: BPF prog-id=178 op=UNLOAD Dec 16 03:17:12.107000 audit[4064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:17:12.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643935303735326332343436636164323438343336633335666432 Dec 16 03:17:12.107000 audit: BPF prog-id=177 op=UNLOAD Dec 16 03:17:12.107000 audit[4064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:12.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643935303735326332343436636164323438343336633335666432 Dec 16 03:17:12.107000 audit: BPF prog-id=179 op=LOAD Dec 16 03:17:12.107000 audit[4064]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=4053 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:12.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939643935303735326332343436636164323438343336633335666432 Dec 16 03:17:12.152107 containerd[1630]: time="2025-12-16T03:17:12.151987685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c6d594499-pd2v8,Uid:4cefbc96-7243-4733-b6b3-1ddb2b5191b3,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"99d950752c2446cad248436c35fd20aabd0255f20e2912bec5841d5822366ca8\"" Dec 16 03:17:12.160337 containerd[1630]: time="2025-12-16T03:17:12.160197351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:17:12.584848 containerd[1630]: time="2025-12-16T03:17:12.584782550Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:12.586416 containerd[1630]: time="2025-12-16T03:17:12.586388757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:17:12.586505 containerd[1630]: time="2025-12-16T03:17:12.586403407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:12.586644 kubelet[2816]: E1216 03:17:12.586601 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:17:12.586710 kubelet[2816]: E1216 03:17:12.586655 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:17:12.592164 kubelet[2816]: E1216 03:17:12.592111 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e489f2e44bb74625b167ec1e6d4af2e7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fj9zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c6d594499-pd2v8_calico-system(4cefbc96-7243-4733-b6b3-1ddb2b5191b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:12.594204 containerd[1630]: time="2025-12-16T03:17:12.594186606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:17:12.736778 containerd[1630]: 
time="2025-12-16T03:17:12.736375106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vbf4g,Uid:8f212018-4b88-48fc-94d2-420427ed0241,Namespace:calico-system,Attempt:0,}" Dec 16 03:17:12.755524 kubelet[2816]: I1216 03:17:12.755254 2816 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1de3291-e9ad-40ba-a5da-243422ad4502" path="/var/lib/kubelet/pods/f1de3291-e9ad-40ba-a5da-243422ad4502/volumes" Dec 16 03:17:12.876341 systemd-networkd[1541]: cali31ba6ba7680: Link UP Dec 16 03:17:12.877314 systemd-networkd[1541]: cali31ba6ba7680: Gained carrier Dec 16 03:17:12.897565 containerd[1630]: 2025-12-16 03:17:12.796 [INFO][4091] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:17:12.897565 containerd[1630]: 2025-12-16 03:17:12.809 [INFO][4091] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--6--1137cb7bd3-k8s-csi--node--driver--vbf4g-eth0 csi-node-driver- calico-system 8f212018-4b88-48fc-94d2-420427ed0241 723 0 2025-12-16 03:16:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547-0-0-6-1137cb7bd3 csi-node-driver-vbf4g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali31ba6ba7680 [] [] }} ContainerID="94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" Namespace="calico-system" Pod="csi-node-driver-vbf4g" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-csi--node--driver--vbf4g-" Dec 16 03:17:12.897565 containerd[1630]: 2025-12-16 03:17:12.809 [INFO][4091] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" Namespace="calico-system" 
Pod="csi-node-driver-vbf4g" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-csi--node--driver--vbf4g-eth0" Dec 16 03:17:12.897565 containerd[1630]: 2025-12-16 03:17:12.835 [INFO][4120] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" HandleID="k8s-pod-network.94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-csi--node--driver--vbf4g-eth0" Dec 16 03:17:12.899080 containerd[1630]: 2025-12-16 03:17:12.835 [INFO][4120] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" HandleID="k8s-pod-network.94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-csi--node--driver--vbf4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-6-1137cb7bd3", "pod":"csi-node-driver-vbf4g", "timestamp":"2025-12-16 03:17:12.835136926 +0000 UTC"}, Hostname:"ci-4547-0-0-6-1137cb7bd3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:17:12.899080 containerd[1630]: 2025-12-16 03:17:12.835 [INFO][4120] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:17:12.899080 containerd[1630]: 2025-12-16 03:17:12.835 [INFO][4120] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:17:12.899080 containerd[1630]: 2025-12-16 03:17:12.835 [INFO][4120] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-6-1137cb7bd3' Dec 16 03:17:12.899080 containerd[1630]: 2025-12-16 03:17:12.841 [INFO][4120] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:12.899080 containerd[1630]: 2025-12-16 03:17:12.846 [INFO][4120] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:12.899080 containerd[1630]: 2025-12-16 03:17:12.851 [INFO][4120] ipam/ipam.go 511: Trying affinity for 192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:12.899080 containerd[1630]: 2025-12-16 03:17:12.852 [INFO][4120] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:12.899080 containerd[1630]: 2025-12-16 03:17:12.854 [INFO][4120] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:12.899451 containerd[1630]: 2025-12-16 03:17:12.854 [INFO][4120] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:12.899451 containerd[1630]: 2025-12-16 03:17:12.856 [INFO][4120] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62 Dec 16 03:17:12.899451 containerd[1630]: 2025-12-16 03:17:12.859 [INFO][4120] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:12.899451 containerd[1630]: 2025-12-16 03:17:12.865 [INFO][4120] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.67.66/26] block=192.168.67.64/26 handle="k8s-pod-network.94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:12.899451 containerd[1630]: 2025-12-16 03:17:12.865 [INFO][4120] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.66/26] handle="k8s-pod-network.94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:12.899451 containerd[1630]: 2025-12-16 03:17:12.865 [INFO][4120] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:17:12.899451 containerd[1630]: 2025-12-16 03:17:12.865 [INFO][4120] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.67.66/26] IPv6=[] ContainerID="94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" HandleID="k8s-pod-network.94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-csi--node--driver--vbf4g-eth0" Dec 16 03:17:12.899595 containerd[1630]: 2025-12-16 03:17:12.870 [INFO][4091] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" Namespace="calico-system" Pod="csi-node-driver-vbf4g" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-csi--node--driver--vbf4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--6--1137cb7bd3-k8s-csi--node--driver--vbf4g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f212018-4b88-48fc-94d2-420427ed0241", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 16, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-6-1137cb7bd3", ContainerID:"", Pod:"csi-node-driver-vbf4g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31ba6ba7680", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:17:12.899649 containerd[1630]: 2025-12-16 03:17:12.870 [INFO][4091] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.66/32] ContainerID="94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" Namespace="calico-system" Pod="csi-node-driver-vbf4g" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-csi--node--driver--vbf4g-eth0" Dec 16 03:17:12.899649 containerd[1630]: 2025-12-16 03:17:12.870 [INFO][4091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31ba6ba7680 ContainerID="94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" Namespace="calico-system" Pod="csi-node-driver-vbf4g" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-csi--node--driver--vbf4g-eth0" Dec 16 03:17:12.899649 containerd[1630]: 2025-12-16 03:17:12.876 [INFO][4091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" Namespace="calico-system" Pod="csi-node-driver-vbf4g" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-csi--node--driver--vbf4g-eth0" Dec 16 03:17:12.899711 
containerd[1630]: 2025-12-16 03:17:12.877 [INFO][4091] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" Namespace="calico-system" Pod="csi-node-driver-vbf4g" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-csi--node--driver--vbf4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--6--1137cb7bd3-k8s-csi--node--driver--vbf4g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8f212018-4b88-48fc-94d2-420427ed0241", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 16, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-6-1137cb7bd3", ContainerID:"94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62", Pod:"csi-node-driver-vbf4g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31ba6ba7680", MAC:"8e:59:71:5c:3b:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:17:12.900159 containerd[1630]: 
2025-12-16 03:17:12.894 [INFO][4091] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" Namespace="calico-system" Pod="csi-node-driver-vbf4g" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-csi--node--driver--vbf4g-eth0" Dec 16 03:17:12.937837 containerd[1630]: time="2025-12-16T03:17:12.937775389Z" level=info msg="connecting to shim 94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62" address="unix:///run/containerd/s/cbe203d55f54ccbaaefb52284ff5d287c488d959745fa38e905d315d723335c5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:17:12.965768 systemd[1]: Started cri-containerd-94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62.scope - libcontainer container 94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62. Dec 16 03:17:12.986000 audit: BPF prog-id=180 op=LOAD Dec 16 03:17:12.987000 audit: BPF prog-id=181 op=LOAD Dec 16 03:17:12.987000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4147 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:12.987000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934383832616636396533653237656563306233306138353030646134 Dec 16 03:17:12.987000 audit: BPF prog-id=181 op=UNLOAD Dec 16 03:17:12.987000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4147 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:12.987000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934383832616636396533653237656563306233306138353030646134 Dec 16 03:17:12.988000 audit: BPF prog-id=182 op=LOAD Dec 16 03:17:12.988000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4147 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:12.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934383832616636396533653237656563306233306138353030646134 Dec 16 03:17:12.988000 audit: BPF prog-id=183 op=LOAD Dec 16 03:17:12.988000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4147 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:12.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934383832616636396533653237656563306233306138353030646134 Dec 16 03:17:12.988000 audit: BPF prog-id=183 op=UNLOAD Dec 16 03:17:12.988000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4147 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:17:12.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934383832616636396533653237656563306233306138353030646134 Dec 16 03:17:12.988000 audit: BPF prog-id=182 op=UNLOAD Dec 16 03:17:12.988000 audit[4165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4147 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:12.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934383832616636396533653237656563306233306138353030646134 Dec 16 03:17:12.988000 audit: BPF prog-id=184 op=LOAD Dec 16 03:17:12.988000 audit[4165]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4147 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:12.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934383832616636396533653237656563306233306138353030646134 Dec 16 03:17:13.006278 containerd[1630]: time="2025-12-16T03:17:13.006179535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vbf4g,Uid:8f212018-4b88-48fc-94d2-420427ed0241,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"94882af69e3e27eec0b30a8500da4886e954da7411e766ec3cef1b6aeba62c62\"" Dec 16 03:17:13.025950 containerd[1630]: time="2025-12-16T03:17:13.025915199Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:13.027315 containerd[1630]: time="2025-12-16T03:17:13.027017152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:17:13.027445 containerd[1630]: time="2025-12-16T03:17:13.027163934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:13.027815 kubelet[2816]: E1216 03:17:13.027643 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:17:13.028037 kubelet[2816]: E1216 03:17:13.027822 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:17:13.028901 containerd[1630]: time="2025-12-16T03:17:13.028727833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:17:13.028945 kubelet[2816]: E1216 03:17:13.028478 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fj9zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c6d594499-pd2v8_calico-system(4cefbc96-7243-4733-b6b3-1ddb2b5191b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:13.029623 kubelet[2816]: E1216 03:17:13.029583 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c6d594499-pd2v8" podUID="4cefbc96-7243-4733-b6b3-1ddb2b5191b3" Dec 16 03:17:13.478208 systemd-networkd[1541]: calid14356d6205: Gained IPv6LL Dec 16 03:17:13.492969 containerd[1630]: time="2025-12-16T03:17:13.492715779Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:13.508696 containerd[1630]: time="2025-12-16T03:17:13.507622292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:17:13.508696 containerd[1630]: time="2025-12-16T03:17:13.507802098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:13.509669 kubelet[2816]: E1216 03:17:13.509114 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:17:13.509669 kubelet[2816]: E1216 03:17:13.509199 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:17:13.509669 kubelet[2816]: E1216 03:17:13.509348 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmlln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:n
il,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vbf4g_calico-system(8f212018-4b88-48fc-94d2-420427ed0241): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:13.513574 containerd[1630]: time="2025-12-16T03:17:13.513521603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:17:13.735681 containerd[1630]: time="2025-12-16T03:17:13.735011113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b7f5fc68-6vh4x,Uid:f3b4d493-b815-435b-8539-393930301f5a,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:17:13.817571 kubelet[2816]: I1216 03:17:13.817523 2816 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 03:17:13.840822 systemd-networkd[1541]: cali6f3863f35a2: Link UP Dec 16 03:17:13.841006 systemd-networkd[1541]: cali6f3863f35a2: Gained carrier Dec 16 03:17:13.865290 containerd[1630]: 2025-12-16 03:17:13.758 [INFO][4246] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:17:13.865290 containerd[1630]: 2025-12-16 03:17:13.768 [INFO][4246] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--6vh4x-eth0 calico-apiserver-9b7f5fc68- calico-apiserver f3b4d493-b815-435b-8539-393930301f5a 834 0 2025-12-16 03:16:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9b7f5fc68 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-6-1137cb7bd3 calico-apiserver-9b7f5fc68-6vh4x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6f3863f35a2 [] [] }} ContainerID="91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" Namespace="calico-apiserver" Pod="calico-apiserver-9b7f5fc68-6vh4x" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--6vh4x-" Dec 16 03:17:13.865290 containerd[1630]: 2025-12-16 03:17:13.768 [INFO][4246] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" Namespace="calico-apiserver" Pod="calico-apiserver-9b7f5fc68-6vh4x" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--6vh4x-eth0" Dec 16 03:17:13.865290 containerd[1630]: 2025-12-16 03:17:13.792 [INFO][4259] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" HandleID="k8s-pod-network.91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--6vh4x-eth0" Dec 16 03:17:13.865477 containerd[1630]: 2025-12-16 03:17:13.792 [INFO][4259] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" HandleID="k8s-pod-network.91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--6vh4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f180), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-6-1137cb7bd3", "pod":"calico-apiserver-9b7f5fc68-6vh4x", "timestamp":"2025-12-16 03:17:13.792012221 +0000 UTC"}, 
Hostname:"ci-4547-0-0-6-1137cb7bd3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:17:13.865477 containerd[1630]: 2025-12-16 03:17:13.792 [INFO][4259] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:17:13.865477 containerd[1630]: 2025-12-16 03:17:13.792 [INFO][4259] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:17:13.865477 containerd[1630]: 2025-12-16 03:17:13.792 [INFO][4259] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-6-1137cb7bd3' Dec 16 03:17:13.865477 containerd[1630]: 2025-12-16 03:17:13.804 [INFO][4259] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:13.865477 containerd[1630]: 2025-12-16 03:17:13.808 [INFO][4259] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:13.865477 containerd[1630]: 2025-12-16 03:17:13.813 [INFO][4259] ipam/ipam.go 511: Trying affinity for 192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:13.865477 containerd[1630]: 2025-12-16 03:17:13.814 [INFO][4259] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:13.865477 containerd[1630]: 2025-12-16 03:17:13.817 [INFO][4259] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:13.866342 containerd[1630]: 2025-12-16 03:17:13.817 [INFO][4259] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:13.866342 containerd[1630]: 2025-12-16 03:17:13.819 
[INFO][4259] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56 Dec 16 03:17:13.866342 containerd[1630]: 2025-12-16 03:17:13.824 [INFO][4259] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:13.866342 containerd[1630]: 2025-12-16 03:17:13.831 [INFO][4259] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.67.67/26] block=192.168.67.64/26 handle="k8s-pod-network.91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:13.866342 containerd[1630]: 2025-12-16 03:17:13.831 [INFO][4259] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.67/26] handle="k8s-pod-network.91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:13.866342 containerd[1630]: 2025-12-16 03:17:13.831 [INFO][4259] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:17:13.866342 containerd[1630]: 2025-12-16 03:17:13.832 [INFO][4259] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.67.67/26] IPv6=[] ContainerID="91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" HandleID="k8s-pod-network.91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--6vh4x-eth0" Dec 16 03:17:13.866616 containerd[1630]: 2025-12-16 03:17:13.836 [INFO][4246] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" Namespace="calico-apiserver" Pod="calico-apiserver-9b7f5fc68-6vh4x" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--6vh4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--6vh4x-eth0", GenerateName:"calico-apiserver-9b7f5fc68-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3b4d493-b815-435b-8539-393930301f5a", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 16, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b7f5fc68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-6-1137cb7bd3", ContainerID:"", Pod:"calico-apiserver-9b7f5fc68-6vh4x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.67.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f3863f35a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:17:13.866679 containerd[1630]: 2025-12-16 03:17:13.836 [INFO][4246] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.67/32] ContainerID="91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" Namespace="calico-apiserver" Pod="calico-apiserver-9b7f5fc68-6vh4x" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--6vh4x-eth0" Dec 16 03:17:13.866679 containerd[1630]: 2025-12-16 03:17:13.836 [INFO][4246] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f3863f35a2 ContainerID="91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" Namespace="calico-apiserver" Pod="calico-apiserver-9b7f5fc68-6vh4x" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--6vh4x-eth0" Dec 16 03:17:13.866679 containerd[1630]: 2025-12-16 03:17:13.840 [INFO][4246] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" Namespace="calico-apiserver" Pod="calico-apiserver-9b7f5fc68-6vh4x" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--6vh4x-eth0" Dec 16 03:17:13.866732 containerd[1630]: 2025-12-16 03:17:13.842 [INFO][4246] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" Namespace="calico-apiserver" Pod="calico-apiserver-9b7f5fc68-6vh4x" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--6vh4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--6vh4x-eth0", GenerateName:"calico-apiserver-9b7f5fc68-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3b4d493-b815-435b-8539-393930301f5a", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 16, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b7f5fc68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-6-1137cb7bd3", ContainerID:"91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56", Pod:"calico-apiserver-9b7f5fc68-6vh4x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f3863f35a2", MAC:"ee:35:eb:d2:68:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:17:13.868260 containerd[1630]: 2025-12-16 03:17:13.863 [INFO][4246] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" Namespace="calico-apiserver" Pod="calico-apiserver-9b7f5fc68-6vh4x" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--6vh4x-eth0" Dec 16 03:17:13.895102 containerd[1630]: time="2025-12-16T03:17:13.894945490Z" level=info 
msg="connecting to shim 91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56" address="unix:///run/containerd/s/b277ba66a2d88e2d5b941954ae2d1e6b8ce850b6759fc2808f39b8cbf2bb0762" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:17:13.921950 systemd[1]: Started cri-containerd-91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56.scope - libcontainer container 91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56. Dec 16 03:17:13.929000 audit: BPF prog-id=185 op=LOAD Dec 16 03:17:13.930000 audit: BPF prog-id=186 op=LOAD Dec 16 03:17:13.930000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4282 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:13.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931663335656633633761313435366633633531643262626465653133 Dec 16 03:17:13.930000 audit: BPF prog-id=186 op=UNLOAD Dec 16 03:17:13.930000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4282 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:13.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931663335656633633761313435366633633531643262626465653133 Dec 16 03:17:13.930000 audit: BPF prog-id=187 op=LOAD Dec 16 03:17:13.930000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4282 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:13.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931663335656633633761313435366633633531643262626465653133 Dec 16 03:17:13.930000 audit: BPF prog-id=188 op=LOAD Dec 16 03:17:13.930000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4282 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:13.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931663335656633633761313435366633633531643262626465653133 Dec 16 03:17:13.930000 audit: BPF prog-id=188 op=UNLOAD Dec 16 03:17:13.930000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4282 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:13.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931663335656633633761313435366633633531643262626465653133 Dec 16 03:17:13.930000 audit: BPF prog-id=187 op=UNLOAD Dec 16 03:17:13.930000 audit[4293]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4282 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:13.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931663335656633633761313435366633633531643262626465653133 Dec 16 03:17:13.930000 audit: BPF prog-id=189 op=LOAD Dec 16 03:17:13.930000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4282 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:13.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3931663335656633633761313435366633633531643262626465653133 Dec 16 03:17:13.943000 audit[4312]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=4312 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:13.943000 audit[4312]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc61796160 a2=0 a3=7ffc6179614c items=0 ppid=2942 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:13.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:13.947000 audit[4312]: 
NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=4312 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:13.947000 audit[4312]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc61796160 a2=0 a3=7ffc6179614c items=0 ppid=2942 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:13.947000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:13.949709 kubelet[2816]: E1216 03:17:13.949387 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c6d594499-pd2v8" podUID="4cefbc96-7243-4733-b6b3-1ddb2b5191b3" Dec 16 03:17:13.952763 containerd[1630]: time="2025-12-16T03:17:13.952686975Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:13.954424 containerd[1630]: time="2025-12-16T03:17:13.954265524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:17:13.954895 containerd[1630]: time="2025-12-16T03:17:13.954325923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:13.955258 kubelet[2816]: E1216 03:17:13.955237 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:17:13.955647 kubelet[2816]: E1216 03:17:13.955631 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:17:13.955882 kubelet[2816]: E1216 03:17:13.955849 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmlln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vbf4g_calico-system(8f212018-4b88-48fc-94d2-420427ed0241): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:13.957013 kubelet[2816]: E1216 03:17:13.956974 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:17:13.973000 audit[4315]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:13.973000 audit[4315]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd339c3cd0 a2=0 a3=7ffd339c3cbc items=0 ppid=2942 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:13.973000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:13.979000 audit[4315]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:13.979000 audit[4315]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd339c3cd0 a2=0 a3=0 items=0 ppid=2942 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:13.979000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:13.982899 containerd[1630]: time="2025-12-16T03:17:13.982857398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b7f5fc68-6vh4x,Uid:f3b4d493-b815-435b-8539-393930301f5a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"91f35ef3c7a1456f3c51d2bbdee13e4b560acac0c6cae718e551e4c81fe0fa56\"" Dec 16 03:17:13.985286 containerd[1630]: time="2025-12-16T03:17:13.985270711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:17:14.317000 audit: BPF prog-id=190 op=LOAD Dec 16 03:17:14.317000 audit[4360]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcb18bdf50 a2=98 a3=1fffffffffffffff items=0 ppid=4321 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.317000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:17:14.319000 audit: BPF prog-id=190 op=UNLOAD Dec 16 03:17:14.319000 audit[4360]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcb18bdf20 a3=0 items=0 ppid=4321 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.319000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:17:14.322000 audit: BPF prog-id=191 op=LOAD Dec 16 03:17:14.322000 audit[4360]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcb18bde30 a2=94 a3=3 items=0 ppid=4321 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.322000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:17:14.322000 audit: BPF prog-id=191 op=UNLOAD Dec 16 03:17:14.322000 audit[4360]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcb18bde30 a2=94 a3=3 items=0 ppid=4321 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.322000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:17:14.323000 audit: BPF prog-id=192 op=LOAD Dec 16 03:17:14.323000 audit[4360]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcb18bde70 a2=94 a3=7ffcb18be050 items=0 ppid=4321 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.323000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:17:14.323000 audit: BPF prog-id=192 op=UNLOAD Dec 16 03:17:14.323000 audit[4360]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcb18bde70 a2=94 a3=7ffcb18be050 items=0 ppid=4321 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.323000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:17:14.331000 audit: BPF prog-id=193 op=LOAD Dec 16 03:17:14.331000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc05a41870 a2=98 a3=3 items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.331000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.332000 audit: BPF prog-id=193 op=UNLOAD Dec 16 03:17:14.332000 audit[4374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc05a41840 a3=0 items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.332000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.335000 audit: BPF prog-id=194 op=LOAD Dec 16 03:17:14.335000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc05a41660 a2=94 a3=54428f items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.335000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.335000 audit: BPF prog-id=194 op=UNLOAD Dec 16 03:17:14.335000 audit[4374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc05a41660 a2=94 a3=54428f items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.335000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.335000 audit: BPF prog-id=195 op=LOAD Dec 16 03:17:14.335000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc05a41690 a2=94 a3=2 items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.335000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.335000 audit: BPF prog-id=195 op=UNLOAD Dec 16 03:17:14.335000 audit[4374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc05a41690 a2=0 a3=2 items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.335000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 
03:17:14.408711 containerd[1630]: time="2025-12-16T03:17:14.408531760Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:14.410420 containerd[1630]: time="2025-12-16T03:17:14.410360599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:17:14.411269 containerd[1630]: time="2025-12-16T03:17:14.410408935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:14.411816 kubelet[2816]: E1216 03:17:14.410792 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:17:14.411816 kubelet[2816]: E1216 03:17:14.410837 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:17:14.411816 kubelet[2816]: E1216 03:17:14.410985 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2vvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9b7f5fc68-6vh4x_calico-apiserver(f3b4d493-b815-435b-8539-393930301f5a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:14.412904 kubelet[2816]: E1216 03:17:14.412833 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:17:14.502000 audit: BPF prog-id=196 op=LOAD Dec 16 03:17:14.502000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc05a41550 a2=94 a3=1 items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.502000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.502000 audit: BPF prog-id=196 op=UNLOAD Dec 16 03:17:14.502000 audit[4374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc05a41550 a2=94 a3=1 items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.502000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.513000 audit: BPF prog-id=197 op=LOAD Dec 16 03:17:14.513000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc05a41540 a2=94 a3=4 items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.513000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.513000 audit: BPF prog-id=197 op=UNLOAD Dec 16 03:17:14.513000 audit[4374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc05a41540 a2=0 a3=4 items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.513000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.513000 audit: BPF prog-id=198 op=LOAD Dec 16 03:17:14.513000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc05a413a0 a2=94 a3=5 items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.513000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.513000 audit: BPF prog-id=198 op=UNLOAD Dec 16 03:17:14.513000 audit[4374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc05a413a0 a2=0 a3=5 items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.513000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.513000 audit: BPF prog-id=199 op=LOAD Dec 16 03:17:14.513000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc05a415c0 a2=94 a3=6 items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.513000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.513000 audit: BPF prog-id=199 op=UNLOAD Dec 16 03:17:14.513000 audit[4374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc05a415c0 a2=0 a3=6 items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.513000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.514000 audit: BPF prog-id=200 op=LOAD Dec 16 03:17:14.514000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc05a40d70 a2=94 a3=88 items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.514000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.514000 audit: BPF prog-id=201 op=LOAD Dec 16 03:17:14.514000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc05a40bf0 a2=94 a3=2 items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.514000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.514000 audit: BPF prog-id=201 op=UNLOAD Dec 16 03:17:14.514000 audit[4374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc05a40c20 a2=0 a3=7ffc05a40d20 items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.514000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.514000 audit: BPF prog-id=200 op=UNLOAD Dec 16 03:17:14.514000 audit[4374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=27a83d10 a2=0 a3=16a3e16ba85e6d27 items=0 ppid=4321 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.514000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:17:14.523000 audit: BPF prog-id=202 op=LOAD Dec 16 03:17:14.523000 audit[4381]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe36258720 a2=98 a3=1999999999999999 items=0 ppid=4321 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.523000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:17:14.523000 audit: BPF prog-id=202 op=UNLOAD Dec 16 03:17:14.523000 audit[4381]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe362586f0 a3=0 items=0 ppid=4321 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.523000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:17:14.524000 audit: BPF prog-id=203 op=LOAD 
Dec 16 03:17:14.524000 audit[4381]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe36258600 a2=94 a3=ffff items=0 ppid=4321 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.524000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:17:14.524000 audit: BPF prog-id=203 op=UNLOAD Dec 16 03:17:14.524000 audit[4381]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe36258600 a2=94 a3=ffff items=0 ppid=4321 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.524000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:17:14.524000 audit: BPF prog-id=204 op=LOAD Dec 16 03:17:14.524000 audit[4381]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe36258640 a2=94 a3=7ffe36258820 items=0 ppid=4321 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.524000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:17:14.524000 audit: BPF prog-id=204 op=UNLOAD Dec 16 03:17:14.524000 audit[4381]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe36258640 a2=94 a3=7ffe36258820 items=0 ppid=4321 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.524000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:17:14.593387 systemd-networkd[1541]: vxlan.calico: Link UP Dec 16 03:17:14.593396 systemd-networkd[1541]: vxlan.calico: Gained carrier Dec 16 03:17:14.615000 audit: BPF prog-id=205 op=LOAD Dec 16 03:17:14.615000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc36f6f260 a2=98 a3=0 items=0 ppid=4321 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.615000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:17:14.615000 audit: BPF prog-id=205 op=UNLOAD Dec 16 03:17:14.615000 audit[4408]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc36f6f230 a3=0 items=0 ppid=4321 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.615000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:17:14.618000 audit: BPF prog-id=206 op=LOAD Dec 16 03:17:14.618000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc36f6f070 a2=94 a3=54428f items=0 ppid=4321 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.618000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:17:14.618000 audit: BPF prog-id=206 op=UNLOAD Dec 16 03:17:14.618000 audit[4408]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc36f6f070 a2=94 a3=54428f items=0 ppid=4321 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.618000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:17:14.618000 audit: BPF prog-id=207 op=LOAD Dec 16 03:17:14.618000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc36f6f0a0 a2=94 a3=2 items=0 ppid=4321 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.618000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:17:14.618000 audit: BPF prog-id=207 op=UNLOAD Dec 16 03:17:14.618000 audit[4408]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc36f6f0a0 a2=0 a3=2 items=0 ppid=4321 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.618000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:17:14.618000 audit: BPF prog-id=208 op=LOAD Dec 16 03:17:14.618000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc36f6ee50 a2=94 a3=4 items=0 ppid=4321 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.618000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:17:14.618000 audit: BPF prog-id=208 op=UNLOAD Dec 16 03:17:14.618000 audit[4408]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc36f6ee50 a2=94 a3=4 items=0 ppid=4321 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.618000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:17:14.618000 audit: BPF prog-id=209 op=LOAD Dec 16 03:17:14.618000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc36f6ef50 a2=94 a3=7ffc36f6f0d0 items=0 ppid=4321 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.618000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:17:14.618000 audit: BPF prog-id=209 op=UNLOAD Dec 16 03:17:14.618000 audit[4408]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc36f6ef50 a2=0 a3=7ffc36f6f0d0 items=0 ppid=4321 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.618000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:17:14.618000 audit: BPF prog-id=210 op=LOAD Dec 16 03:17:14.618000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc36f6e680 a2=94 a3=2 items=0 ppid=4321 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 03:17:14.618000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:17:14.618000 audit: BPF prog-id=210 op=UNLOAD Dec 16 03:17:14.618000 audit[4408]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc36f6e680 a2=0 a3=2 items=0 ppid=4321 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.618000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:17:14.618000 audit: BPF prog-id=211 op=LOAD Dec 16 03:17:14.618000 audit[4408]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc36f6e780 a2=94 a3=30 items=0 ppid=4321 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.618000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:17:14.629000 audit: BPF prog-id=212 op=LOAD Dec 16 03:17:14.629000 audit[4412]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff27f10560 a2=98 a3=0 items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.629000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.629000 audit: BPF prog-id=212 op=UNLOAD Dec 16 03:17:14.629000 audit[4412]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff27f10530 a3=0 items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.629000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.629000 audit: BPF prog-id=213 op=LOAD Dec 16 03:17:14.629000 audit[4412]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff27f10350 a2=94 a3=54428f items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.629000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.630000 audit: BPF prog-id=213 op=UNLOAD Dec 16 03:17:14.630000 audit[4412]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff27f10350 a2=94 a3=54428f items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.630000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.630000 audit: BPF prog-id=214 op=LOAD Dec 16 03:17:14.630000 audit[4412]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff27f10380 a2=94 a3=2 items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.630000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.630000 audit: BPF prog-id=214 op=UNLOAD Dec 16 03:17:14.630000 audit[4412]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff27f10380 a2=0 a3=2 items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.630000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.735966 containerd[1630]: time="2025-12-16T03:17:14.735915732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b7f5fc68-z5vrj,Uid:0981f349-361d-45e9-bda1-a29e4e4386d6,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:17:14.736278 containerd[1630]: time="2025-12-16T03:17:14.736253730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lgq5m,Uid:c4b190c6-e978-4cd1-9872-1d44369dd5d3,Namespace:kube-system,Attempt:0,}" Dec 16 03:17:14.738774 containerd[1630]: time="2025-12-16T03:17:14.738027801Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dcc4656ff-vfbkw,Uid:2deef273-b182-480c-9527-049c0bd96660,Namespace:calico-system,Attempt:0,}" Dec 16 03:17:14.752079 systemd-networkd[1541]: cali31ba6ba7680: Gained IPv6LL Dec 16 03:17:14.821000 audit: BPF prog-id=215 op=LOAD Dec 16 03:17:14.821000 audit[4412]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff27f10240 a2=94 a3=1 items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.821000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.822000 audit: BPF prog-id=215 op=UNLOAD Dec 16 03:17:14.822000 audit[4412]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff27f10240 a2=94 a3=1 items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.822000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.850000 audit: BPF prog-id=216 op=LOAD Dec 16 03:17:14.850000 audit[4412]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff27f10230 a2=94 a3=4 items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.850000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.850000 audit: BPF prog-id=216 op=UNLOAD Dec 16 03:17:14.850000 audit[4412]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff27f10230 a2=0 a3=4 items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.850000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.853000 audit: BPF prog-id=217 op=LOAD Dec 16 03:17:14.853000 audit[4412]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff27f10090 a2=94 a3=5 items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.853000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.854000 audit: BPF prog-id=217 op=UNLOAD Dec 16 03:17:14.854000 audit[4412]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff27f10090 a2=0 a3=5 items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.854000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.854000 audit: BPF prog-id=218 op=LOAD Dec 16 03:17:14.854000 audit[4412]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff27f102b0 a2=94 a3=6 items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.854000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.854000 audit: BPF prog-id=218 op=UNLOAD Dec 16 03:17:14.854000 audit[4412]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff27f102b0 a2=0 a3=6 items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.854000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.855000 audit: BPF prog-id=219 op=LOAD Dec 16 03:17:14.855000 audit[4412]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff27f0fa60 a2=94 a3=88 items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.855000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.855000 audit: BPF prog-id=220 op=LOAD Dec 16 03:17:14.855000 audit[4412]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff27f0f8e0 a2=94 a3=2 items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.855000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.855000 audit: BPF prog-id=220 op=UNLOAD Dec 16 03:17:14.855000 audit[4412]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff27f0f910 a2=0 a3=7fff27f0fa10 items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.855000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.855000 audit: BPF prog-id=219 op=UNLOAD Dec 16 03:17:14.855000 audit[4412]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=a9c1d10 a2=0 a3=9ded20a049819dc6 items=0 ppid=4321 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.855000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:17:14.880000 audit: BPF prog-id=211 op=UNLOAD Dec 16 03:17:14.880000 audit[4321]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000fc9400 a2=0 a3=0 items=0 ppid=3949 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:14.880000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 03:17:14.922459 systemd-networkd[1541]: calic0456f221d1: Link UP Dec 16 03:17:14.923260 systemd-networkd[1541]: calic0456f221d1: Gained carrier Dec 16 03:17:14.937231 containerd[1630]: 2025-12-16 03:17:14.820 [INFO][4423] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--6--1137cb7bd3-k8s-calico--kube--controllers--dcc4656ff--vfbkw-eth0 calico-kube-controllers-dcc4656ff- calico-system 2deef273-b182-480c-9527-049c0bd96660 835 0 2025-12-16 03:16:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:dcc4656ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547-0-0-6-1137cb7bd3 calico-kube-controllers-dcc4656ff-vfbkw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic0456f221d1 [] [] }} ContainerID="0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" Namespace="calico-system" Pod="calico-kube-controllers-dcc4656ff-vfbkw" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--kube--controllers--dcc4656ff--vfbkw-" Dec 16 03:17:14.937231 containerd[1630]: 2025-12-16 03:17:14.821 
[INFO][4423] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" Namespace="calico-system" Pod="calico-kube-controllers-dcc4656ff-vfbkw" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--kube--controllers--dcc4656ff--vfbkw-eth0" Dec 16 03:17:14.937231 containerd[1630]: 2025-12-16 03:17:14.857 [INFO][4449] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" HandleID="k8s-pod-network.0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-calico--kube--controllers--dcc4656ff--vfbkw-eth0" Dec 16 03:17:14.937423 containerd[1630]: 2025-12-16 03:17:14.861 [INFO][4449] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" HandleID="k8s-pod-network.0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-calico--kube--controllers--dcc4656ff--vfbkw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-6-1137cb7bd3", "pod":"calico-kube-controllers-dcc4656ff-vfbkw", "timestamp":"2025-12-16 03:17:14.857047313 +0000 UTC"}, Hostname:"ci-4547-0-0-6-1137cb7bd3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:17:14.937423 containerd[1630]: 2025-12-16 03:17:14.861 [INFO][4449] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:17:14.937423 containerd[1630]: 2025-12-16 03:17:14.861 [INFO][4449] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:17:14.937423 containerd[1630]: 2025-12-16 03:17:14.861 [INFO][4449] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-6-1137cb7bd3' Dec 16 03:17:14.937423 containerd[1630]: 2025-12-16 03:17:14.873 [INFO][4449] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:14.937423 containerd[1630]: 2025-12-16 03:17:14.877 [INFO][4449] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:14.937423 containerd[1630]: 2025-12-16 03:17:14.886 [INFO][4449] ipam/ipam.go 511: Trying affinity for 192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:14.937423 containerd[1630]: 2025-12-16 03:17:14.891 [INFO][4449] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:14.937423 containerd[1630]: 2025-12-16 03:17:14.897 [INFO][4449] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:14.938085 containerd[1630]: 2025-12-16 03:17:14.897 [INFO][4449] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:14.938085 containerd[1630]: 2025-12-16 03:17:14.901 [INFO][4449] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748 Dec 16 03:17:14.938085 containerd[1630]: 2025-12-16 03:17:14.906 [INFO][4449] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:14.938085 containerd[1630]: 2025-12-16 03:17:14.914 [INFO][4449] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.67.68/26] block=192.168.67.64/26 handle="k8s-pod-network.0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:14.938085 containerd[1630]: 2025-12-16 03:17:14.914 [INFO][4449] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.68/26] handle="k8s-pod-network.0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:14.938085 containerd[1630]: 2025-12-16 03:17:14.915 [INFO][4449] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:17:14.938085 containerd[1630]: 2025-12-16 03:17:14.915 [INFO][4449] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.67.68/26] IPv6=[] ContainerID="0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" HandleID="k8s-pod-network.0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-calico--kube--controllers--dcc4656ff--vfbkw-eth0" Dec 16 03:17:14.938203 containerd[1630]: 2025-12-16 03:17:14.917 [INFO][4423] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" Namespace="calico-system" Pod="calico-kube-controllers-dcc4656ff-vfbkw" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--kube--controllers--dcc4656ff--vfbkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--6--1137cb7bd3-k8s-calico--kube--controllers--dcc4656ff--vfbkw-eth0", GenerateName:"calico-kube-controllers-dcc4656ff-", Namespace:"calico-system", SelfLink:"", UID:"2deef273-b182-480c-9527-049c0bd96660", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 16, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dcc4656ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-6-1137cb7bd3", ContainerID:"", Pod:"calico-kube-controllers-dcc4656ff-vfbkw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic0456f221d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:17:14.938256 containerd[1630]: 2025-12-16 03:17:14.918 [INFO][4423] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.68/32] ContainerID="0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" Namespace="calico-system" Pod="calico-kube-controllers-dcc4656ff-vfbkw" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--kube--controllers--dcc4656ff--vfbkw-eth0" Dec 16 03:17:14.938256 containerd[1630]: 2025-12-16 03:17:14.918 [INFO][4423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0456f221d1 ContainerID="0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" Namespace="calico-system" Pod="calico-kube-controllers-dcc4656ff-vfbkw" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--kube--controllers--dcc4656ff--vfbkw-eth0" Dec 16 03:17:14.938256 containerd[1630]: 2025-12-16 03:17:14.923 [INFO][4423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" Namespace="calico-system" Pod="calico-kube-controllers-dcc4656ff-vfbkw" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--kube--controllers--dcc4656ff--vfbkw-eth0" Dec 16 03:17:14.938310 containerd[1630]: 2025-12-16 03:17:14.924 [INFO][4423] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" Namespace="calico-system" Pod="calico-kube-controllers-dcc4656ff-vfbkw" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--kube--controllers--dcc4656ff--vfbkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--6--1137cb7bd3-k8s-calico--kube--controllers--dcc4656ff--vfbkw-eth0", GenerateName:"calico-kube-controllers-dcc4656ff-", Namespace:"calico-system", SelfLink:"", UID:"2deef273-b182-480c-9527-049c0bd96660", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 16, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dcc4656ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-6-1137cb7bd3", ContainerID:"0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748", Pod:"calico-kube-controllers-dcc4656ff-vfbkw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.68/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic0456f221d1", MAC:"4a:a3:e9:cd:85:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:17:14.938355 containerd[1630]: 2025-12-16 03:17:14.933 [INFO][4423] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" Namespace="calico-system" Pod="calico-kube-controllers-dcc4656ff-vfbkw" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--kube--controllers--dcc4656ff--vfbkw-eth0" Dec 16 03:17:14.949736 kubelet[2816]: E1216 03:17:14.949709 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:17:14.951314 kubelet[2816]: E1216 03:17:14.951266 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:17:14.978429 containerd[1630]: time="2025-12-16T03:17:14.978297690Z" level=info msg="connecting to shim 0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748" address="unix:///run/containerd/s/4559563224386ca23197ae13bb7426d32d7e20f0054ec6c983f023261761e765" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:17:15.034906 systemd[1]: Started cri-containerd-0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748.scope - libcontainer container 0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748. Dec 16 03:17:15.043291 kernel: kauditd_printk_skb: 269 callbacks suppressed Dec 16 03:17:15.043479 kernel: audit: type=1325 audit(1765855035.036:671): table=filter:121 family=2 entries=20 op=nft_register_rule pid=4538 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:15.052167 kernel: audit: type=1300 audit(1765855035.036:671): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc157c1050 a2=0 a3=7ffc157c103c items=0 ppid=2942 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.036000 audit[4538]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4538 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:15.036000 audit[4538]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc157c1050 a2=0 a3=7ffc157c103c items=0 ppid=2942 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 03:17:15.036000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:15.061591 kernel: audit: type=1327 audit(1765855035.036:671): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:15.061677 kernel: audit: type=1325 audit(1765855035.044:672): table=nat:122 family=2 entries=14 op=nft_register_rule pid=4538 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:15.044000 audit[4538]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4538 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:15.070012 kernel: audit: type=1300 audit(1765855035.044:672): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc157c1050 a2=0 a3=0 items=0 ppid=2942 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.044000 audit[4538]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc157c1050 a2=0 a3=0 items=0 ppid=2942 pid=4538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.044000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:15.075797 kernel: audit: type=1327 audit(1765855035.044:672): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:15.085786 kernel: audit: type=1325 audit(1765855035.080:673): table=mangle:123 family=2 entries=16 op=nft_register_chain pid=4530 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 
03:17:15.080000 audit[4530]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=4530 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:17:15.094324 kernel: audit: type=1300 audit(1765855035.080:673): arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffebe82d5a0 a2=0 a3=7ffebe82d58c items=0 ppid=4321 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.080000 audit[4530]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffebe82d5a0 a2=0 a3=7ffebe82d58c items=0 ppid=4321 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.101073 kernel: audit: type=1327 audit(1765855035.080:673): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:17:15.080000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:17:15.102000 audit: BPF prog-id=221 op=LOAD Dec 16 03:17:15.107901 kernel: audit: type=1334 audit(1765855035.102:674): prog-id=221 op=LOAD Dec 16 03:17:15.104000 audit: BPF prog-id=222 op=LOAD Dec 16 03:17:15.104000 audit[4518]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4505 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.104000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066363635383565656339313731626630666464346136356239303336 Dec 16 03:17:15.105000 audit: BPF prog-id=222 op=UNLOAD Dec 16 03:17:15.105000 audit[4518]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4505 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066363635383565656339313731626630666464346136356239303336 Dec 16 03:17:15.106000 audit: BPF prog-id=223 op=LOAD Dec 16 03:17:15.107008 systemd-networkd[1541]: cali2e8d64386c6: Link UP Dec 16 03:17:15.108405 systemd-networkd[1541]: cali2e8d64386c6: Gained carrier Dec 16 03:17:15.106000 audit[4518]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4505 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066363635383565656339313731626630666464346136356239303336 Dec 16 03:17:15.107000 audit: BPF prog-id=224 op=LOAD Dec 16 03:17:15.107000 audit[4518]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4505 pid=4518 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066363635383565656339313731626630666464346136356239303336 Dec 16 03:17:15.107000 audit: BPF prog-id=224 op=UNLOAD Dec 16 03:17:15.107000 audit[4518]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4505 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066363635383565656339313731626630666464346136356239303336 Dec 16 03:17:15.107000 audit: BPF prog-id=223 op=UNLOAD Dec 16 03:17:15.107000 audit[4518]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4505 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066363635383565656339313731626630666464346136356239303336 Dec 16 03:17:15.107000 audit: BPF prog-id=225 op=LOAD Dec 16 03:17:15.107000 audit[4518]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 
ppid=4505 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066363635383565656339313731626630666464346136356239303336 Dec 16 03:17:15.121000 audit[4553]: NETFILTER_CFG table=nat:124 family=2 entries=15 op=nft_register_chain pid=4553 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:17:15.122000 audit[4532]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4532 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:17:15.122000 audit[4532]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd47034730 a2=0 a3=7ffd4703471c items=0 ppid=4321 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.122000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:17:15.121000 audit[4553]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffca8d96b70 a2=0 a3=7ffca8d96b5c items=0 ppid=4321 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.121000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:17:15.131821 
containerd[1630]: 2025-12-16 03:17:14.840 [INFO][4413] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--z5vrj-eth0 calico-apiserver-9b7f5fc68- calico-apiserver 0981f349-361d-45e9-bda1-a29e4e4386d6 833 0 2025-12-16 03:16:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9b7f5fc68 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-6-1137cb7bd3 calico-apiserver-9b7f5fc68-z5vrj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2e8d64386c6 [] [] }} ContainerID="aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" Namespace="calico-apiserver" Pod="calico-apiserver-9b7f5fc68-z5vrj" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--z5vrj-" Dec 16 03:17:15.131821 containerd[1630]: 2025-12-16 03:17:14.840 [INFO][4413] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" Namespace="calico-apiserver" Pod="calico-apiserver-9b7f5fc68-z5vrj" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--z5vrj-eth0" Dec 16 03:17:15.131821 containerd[1630]: 2025-12-16 03:17:14.907 [INFO][4457] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" HandleID="k8s-pod-network.aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--z5vrj-eth0" Dec 16 03:17:15.131997 containerd[1630]: 2025-12-16 03:17:14.907 [INFO][4457] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" HandleID="k8s-pod-network.aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--z5vrj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000349710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-6-1137cb7bd3", "pod":"calico-apiserver-9b7f5fc68-z5vrj", "timestamp":"2025-12-16 03:17:14.906338196 +0000 UTC"}, Hostname:"ci-4547-0-0-6-1137cb7bd3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:17:15.131997 containerd[1630]: 2025-12-16 03:17:14.907 [INFO][4457] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:17:15.131997 containerd[1630]: 2025-12-16 03:17:14.915 [INFO][4457] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:17:15.131997 containerd[1630]: 2025-12-16 03:17:14.915 [INFO][4457] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-6-1137cb7bd3' Dec 16 03:17:15.131997 containerd[1630]: 2025-12-16 03:17:14.978 [INFO][4457] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.131997 containerd[1630]: 2025-12-16 03:17:14.996 [INFO][4457] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.131997 containerd[1630]: 2025-12-16 03:17:15.017 [INFO][4457] ipam/ipam.go 511: Trying affinity for 192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.131997 containerd[1630]: 2025-12-16 03:17:15.028 [INFO][4457] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.131997 containerd[1630]: 2025-12-16 03:17:15.033 [INFO][4457] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.132167 containerd[1630]: 2025-12-16 03:17:15.033 [INFO][4457] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.132167 containerd[1630]: 2025-12-16 03:17:15.037 [INFO][4457] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0 Dec 16 03:17:15.132167 containerd[1630]: 2025-12-16 03:17:15.053 [INFO][4457] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.132167 containerd[1630]: 2025-12-16 03:17:15.059 [INFO][4457] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.67.69/26] block=192.168.67.64/26 handle="k8s-pod-network.aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.132167 containerd[1630]: 2025-12-16 03:17:15.059 [INFO][4457] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.69/26] handle="k8s-pod-network.aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.132167 containerd[1630]: 2025-12-16 03:17:15.061 [INFO][4457] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:17:15.132167 containerd[1630]: 2025-12-16 03:17:15.061 [INFO][4457] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.67.69/26] IPv6=[] ContainerID="aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" HandleID="k8s-pod-network.aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--z5vrj-eth0" Dec 16 03:17:15.132276 containerd[1630]: 2025-12-16 03:17:15.068 [INFO][4413] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" Namespace="calico-apiserver" Pod="calico-apiserver-9b7f5fc68-z5vrj" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--z5vrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--z5vrj-eth0", GenerateName:"calico-apiserver-9b7f5fc68-", Namespace:"calico-apiserver", SelfLink:"", UID:"0981f349-361d-45e9-bda1-a29e4e4386d6", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 16, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"9b7f5fc68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-6-1137cb7bd3", ContainerID:"", Pod:"calico-apiserver-9b7f5fc68-z5vrj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e8d64386c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:17:15.132322 containerd[1630]: 2025-12-16 03:17:15.068 [INFO][4413] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.69/32] ContainerID="aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" Namespace="calico-apiserver" Pod="calico-apiserver-9b7f5fc68-z5vrj" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--z5vrj-eth0" Dec 16 03:17:15.132322 containerd[1630]: 2025-12-16 03:17:15.068 [INFO][4413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e8d64386c6 ContainerID="aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" Namespace="calico-apiserver" Pod="calico-apiserver-9b7f5fc68-z5vrj" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--z5vrj-eth0" Dec 16 03:17:15.132322 containerd[1630]: 2025-12-16 03:17:15.109 [INFO][4413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" Namespace="calico-apiserver" Pod="calico-apiserver-9b7f5fc68-z5vrj" 
WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--z5vrj-eth0" Dec 16 03:17:15.132374 containerd[1630]: 2025-12-16 03:17:15.110 [INFO][4413] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" Namespace="calico-apiserver" Pod="calico-apiserver-9b7f5fc68-z5vrj" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--z5vrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--z5vrj-eth0", GenerateName:"calico-apiserver-9b7f5fc68-", Namespace:"calico-apiserver", SelfLink:"", UID:"0981f349-361d-45e9-bda1-a29e4e4386d6", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 16, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b7f5fc68", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-6-1137cb7bd3", ContainerID:"aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0", Pod:"calico-apiserver-9b7f5fc68-z5vrj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e8d64386c6", MAC:"4e:9a:37:6f:ac:65", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:17:15.132416 containerd[1630]: 2025-12-16 03:17:15.129 [INFO][4413] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" Namespace="calico-apiserver" Pod="calico-apiserver-9b7f5fc68-z5vrj" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-calico--apiserver--9b7f5fc68--z5vrj-eth0" Dec 16 03:17:15.164909 containerd[1630]: time="2025-12-16T03:17:15.164869103Z" level=info msg="connecting to shim aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0" address="unix:///run/containerd/s/b3360cba14e09b44199b2eccec5a87a029110f8c2b70626022c0fcbc00d985f3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:17:15.141000 audit[4555]: NETFILTER_CFG table=filter:126 family=2 entries=164 op=nft_register_chain pid=4555 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:17:15.141000 audit[4555]: SYSCALL arch=c000003e syscall=46 success=yes exit=95100 a0=3 a1=7ffecffa7b30 a2=0 a3=7ffecffa7b1c items=0 ppid=4321 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.141000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:17:15.178542 systemd-networkd[1541]: cali8c5cb4c2254: Link UP Dec 16 03:17:15.179964 systemd-networkd[1541]: cali8c5cb4c2254: Gained carrier Dec 16 03:17:15.218074 containerd[1630]: 2025-12-16 03:17:14.861 [INFO][4429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--lgq5m-eth0 coredns-668d6bf9bc- kube-system 
c4b190c6-e978-4cd1-9872-1d44369dd5d3 837 0 2025-12-16 03:16:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-6-1137cb7bd3 coredns-668d6bf9bc-lgq5m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8c5cb4c2254 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgq5m" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--lgq5m-" Dec 16 03:17:15.218074 containerd[1630]: 2025-12-16 03:17:14.861 [INFO][4429] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgq5m" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--lgq5m-eth0" Dec 16 03:17:15.218074 containerd[1630]: 2025-12-16 03:17:14.909 [INFO][4466] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" HandleID="k8s-pod-network.595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--lgq5m-eth0" Dec 16 03:17:15.218270 containerd[1630]: 2025-12-16 03:17:14.909 [INFO][4466] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" HandleID="k8s-pod-network.595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--lgq5m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-6-1137cb7bd3", "pod":"coredns-668d6bf9bc-lgq5m", 
"timestamp":"2025-12-16 03:17:14.909388973 +0000 UTC"}, Hostname:"ci-4547-0-0-6-1137cb7bd3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:17:15.218270 containerd[1630]: 2025-12-16 03:17:14.909 [INFO][4466] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:17:15.218270 containerd[1630]: 2025-12-16 03:17:15.060 [INFO][4466] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:17:15.218270 containerd[1630]: 2025-12-16 03:17:15.060 [INFO][4466] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-6-1137cb7bd3' Dec 16 03:17:15.218270 containerd[1630]: 2025-12-16 03:17:15.078 [INFO][4466] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.218270 containerd[1630]: 2025-12-16 03:17:15.098 [INFO][4466] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.218270 containerd[1630]: 2025-12-16 03:17:15.113 [INFO][4466] ipam/ipam.go 511: Trying affinity for 192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.218270 containerd[1630]: 2025-12-16 03:17:15.116 [INFO][4466] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.218270 containerd[1630]: 2025-12-16 03:17:15.123 [INFO][4466] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.218432 containerd[1630]: 2025-12-16 03:17:15.123 [INFO][4466] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 
03:17:15.218432 containerd[1630]: 2025-12-16 03:17:15.126 [INFO][4466] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca Dec 16 03:17:15.218432 containerd[1630]: 2025-12-16 03:17:15.135 [INFO][4466] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.218432 containerd[1630]: 2025-12-16 03:17:15.146 [INFO][4466] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.67.70/26] block=192.168.67.64/26 handle="k8s-pod-network.595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.218432 containerd[1630]: 2025-12-16 03:17:15.147 [INFO][4466] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.70/26] handle="k8s-pod-network.595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.218432 containerd[1630]: 2025-12-16 03:17:15.147 [INFO][4466] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:17:15.218432 containerd[1630]: 2025-12-16 03:17:15.147 [INFO][4466] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.67.70/26] IPv6=[] ContainerID="595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" HandleID="k8s-pod-network.595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--lgq5m-eth0" Dec 16 03:17:15.218546 containerd[1630]: 2025-12-16 03:17:15.168 [INFO][4429] cni-plugin/k8s.go 418: Populated endpoint ContainerID="595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgq5m" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--lgq5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--lgq5m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c4b190c6-e978-4cd1-9872-1d44369dd5d3", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 16, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-6-1137cb7bd3", ContainerID:"", Pod:"coredns-668d6bf9bc-lgq5m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali8c5cb4c2254", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:17:15.218546 containerd[1630]: 2025-12-16 03:17:15.168 [INFO][4429] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.70/32] ContainerID="595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgq5m" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--lgq5m-eth0" Dec 16 03:17:15.218546 containerd[1630]: 2025-12-16 03:17:15.168 [INFO][4429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c5cb4c2254 ContainerID="595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgq5m" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--lgq5m-eth0" Dec 16 03:17:15.218546 containerd[1630]: 2025-12-16 03:17:15.179 [INFO][4429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgq5m" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--lgq5m-eth0" Dec 16 03:17:15.218546 containerd[1630]: 2025-12-16 03:17:15.180 [INFO][4429] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgq5m" 
WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--lgq5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--lgq5m-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c4b190c6-e978-4cd1-9872-1d44369dd5d3", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 16, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-6-1137cb7bd3", ContainerID:"595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca", Pod:"coredns-668d6bf9bc-lgq5m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c5cb4c2254", MAC:"c2:5f:02:aa:da:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:17:15.218546 
containerd[1630]: 2025-12-16 03:17:15.204 [INFO][4429] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgq5m" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--lgq5m-eth0" Dec 16 03:17:15.228609 containerd[1630]: time="2025-12-16T03:17:15.228546413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dcc4656ff-vfbkw,Uid:2deef273-b182-480c-9527-049c0bd96660,Namespace:calico-system,Attempt:0,} returns sandbox id \"0f66585eec9171bf0fdd4a65b903632ceab05580974bbddc400d4422d8455748\"" Dec 16 03:17:15.232259 containerd[1630]: time="2025-12-16T03:17:15.232107240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:17:15.240192 systemd[1]: Started cri-containerd-aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0.scope - libcontainer container aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0. 
Dec 16 03:17:15.254000 audit[4620]: NETFILTER_CFG table=filter:127 family=2 entries=107 op=nft_register_chain pid=4620 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:17:15.254000 audit[4620]: SYSCALL arch=c000003e syscall=46 success=yes exit=60592 a0=3 a1=7fff3eac6800 a2=0 a3=7fff3eac67ec items=0 ppid=4321 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.254000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:17:15.258312 containerd[1630]: time="2025-12-16T03:17:15.257925732Z" level=info msg="connecting to shim 595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca" address="unix:///run/containerd/s/1328efd72eccba39b58ef77808fcf53775c1de20d185381f9671b3bbf1efe707" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:17:15.261000 audit: BPF prog-id=226 op=LOAD Dec 16 03:17:15.261000 audit: BPF prog-id=227 op=LOAD Dec 16 03:17:15.261000 audit[4594]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4574 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165646361373465313633613764396663343132643734336566313039 Dec 16 03:17:15.262000 audit: BPF prog-id=227 op=UNLOAD Dec 16 03:17:15.262000 audit[4594]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4574 pid=4594 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165646361373465313633613764396663343132643734336566313039 Dec 16 03:17:15.262000 audit: BPF prog-id=228 op=LOAD Dec 16 03:17:15.262000 audit[4594]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4574 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165646361373465313633613764396663343132643734336566313039 Dec 16 03:17:15.262000 audit: BPF prog-id=229 op=LOAD Dec 16 03:17:15.262000 audit[4594]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4574 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165646361373465313633613764396663343132643734336566313039 Dec 16 03:17:15.262000 audit: BPF prog-id=229 op=UNLOAD Dec 16 03:17:15.262000 audit[4594]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 
a1=0 a2=0 a3=0 items=0 ppid=4574 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165646361373465313633613764396663343132643734336566313039 Dec 16 03:17:15.262000 audit: BPF prog-id=228 op=UNLOAD Dec 16 03:17:15.262000 audit[4594]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4574 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165646361373465313633613764396663343132643734336566313039 Dec 16 03:17:15.262000 audit: BPF prog-id=230 op=LOAD Dec 16 03:17:15.262000 audit[4594]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4574 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165646361373465313633613764396663343132643734336566313039 Dec 16 03:17:15.285087 systemd[1]: Started 
cri-containerd-595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca.scope - libcontainer container 595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca. Dec 16 03:17:15.296000 audit: BPF prog-id=231 op=LOAD Dec 16 03:17:15.296000 audit: BPF prog-id=232 op=LOAD Dec 16 03:17:15.296000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4629 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539356531363836613966356161353834323865333563663631333837 Dec 16 03:17:15.296000 audit: BPF prog-id=232 op=UNLOAD Dec 16 03:17:15.296000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4629 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539356531363836613966356161353834323865333563663631333837 Dec 16 03:17:15.297000 audit: BPF prog-id=233 op=LOAD Dec 16 03:17:15.297000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4629 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.297000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539356531363836613966356161353834323865333563663631333837 Dec 16 03:17:15.297000 audit: BPF prog-id=234 op=LOAD Dec 16 03:17:15.297000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4629 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539356531363836613966356161353834323865333563663631333837 Dec 16 03:17:15.297000 audit: BPF prog-id=234 op=UNLOAD Dec 16 03:17:15.297000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4629 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539356531363836613966356161353834323865333563663631333837 Dec 16 03:17:15.297000 audit: BPF prog-id=233 op=UNLOAD Dec 16 03:17:15.297000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4629 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:17:15.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539356531363836613966356161353834323865333563663631333837 Dec 16 03:17:15.297000 audit: BPF prog-id=235 op=LOAD Dec 16 03:17:15.297000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4629 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539356531363836613966356161353834323865333563663631333837 Dec 16 03:17:15.321067 containerd[1630]: time="2025-12-16T03:17:15.321009948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b7f5fc68-z5vrj,Uid:0981f349-361d-45e9-bda1-a29e4e4386d6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aedca74e163a7d9fc412d743ef109eaf1c08b9f26413bb04931ef2652c6b57a0\"" Dec 16 03:17:15.340006 containerd[1630]: time="2025-12-16T03:17:15.339887104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lgq5m,Uid:c4b190c6-e978-4cd1-9872-1d44369dd5d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca\"" Dec 16 03:17:15.383343 containerd[1630]: time="2025-12-16T03:17:15.383093569Z" level=info msg="CreateContainer within sandbox \"595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:17:15.397897 containerd[1630]: 
time="2025-12-16T03:17:15.397861914Z" level=info msg="Container 71d66885996b072ef03a3a2c9523d35898eb3c25847572d75b450f769de7213c: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:17:15.404320 containerd[1630]: time="2025-12-16T03:17:15.404286290Z" level=info msg="CreateContainer within sandbox \"595e1686a9f5aa58428e35cf613872185a683ad29f6d9f328efbd0742661ddca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71d66885996b072ef03a3a2c9523d35898eb3c25847572d75b450f769de7213c\"" Dec 16 03:17:15.404996 containerd[1630]: time="2025-12-16T03:17:15.404866486Z" level=info msg="StartContainer for \"71d66885996b072ef03a3a2c9523d35898eb3c25847572d75b450f769de7213c\"" Dec 16 03:17:15.405996 containerd[1630]: time="2025-12-16T03:17:15.405975176Z" level=info msg="connecting to shim 71d66885996b072ef03a3a2c9523d35898eb3c25847572d75b450f769de7213c" address="unix:///run/containerd/s/1328efd72eccba39b58ef77808fcf53775c1de20d185381f9671b3bbf1efe707" protocol=ttrpc version=3 Dec 16 03:17:15.426922 systemd[1]: Started cri-containerd-71d66885996b072ef03a3a2c9523d35898eb3c25847572d75b450f769de7213c.scope - libcontainer container 71d66885996b072ef03a3a2c9523d35898eb3c25847572d75b450f769de7213c. 
Dec 16 03:17:15.447000 audit: BPF prog-id=236 op=LOAD Dec 16 03:17:15.448000 audit: BPF prog-id=237 op=LOAD Dec 16 03:17:15.448000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa238 a2=98 a3=0 items=0 ppid=4629 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731643636383835393936623037326566303361336132633935323364 Dec 16 03:17:15.448000 audit: BPF prog-id=237 op=UNLOAD Dec 16 03:17:15.448000 audit[4672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4629 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731643636383835393936623037326566303361336132633935323364 Dec 16 03:17:15.448000 audit: BPF prog-id=238 op=LOAD Dec 16 03:17:15.448000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa488 a2=98 a3=0 items=0 ppid=4629 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.448000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731643636383835393936623037326566303361336132633935323364 Dec 16 03:17:15.448000 audit: BPF prog-id=239 op=LOAD Dec 16 03:17:15.448000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001fa218 a2=98 a3=0 items=0 ppid=4629 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731643636383835393936623037326566303361336132633935323364 Dec 16 03:17:15.448000 audit: BPF prog-id=239 op=UNLOAD Dec 16 03:17:15.448000 audit[4672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4629 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731643636383835393936623037326566303361336132633935323364 Dec 16 03:17:15.448000 audit: BPF prog-id=238 op=UNLOAD Dec 16 03:17:15.448000 audit[4672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4629 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:17:15.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731643636383835393936623037326566303361336132633935323364 Dec 16 03:17:15.448000 audit: BPF prog-id=240 op=LOAD Dec 16 03:17:15.448000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001fa6e8 a2=98 a3=0 items=0 ppid=4629 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731643636383835393936623037326566303361336132633935323364 Dec 16 03:17:15.465871 containerd[1630]: time="2025-12-16T03:17:15.465798389Z" level=info msg="StartContainer for \"71d66885996b072ef03a3a2c9523d35898eb3c25847572d75b450f769de7213c\" returns successfully" Dec 16 03:17:15.679848 containerd[1630]: time="2025-12-16T03:17:15.679678088Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:15.681343 containerd[1630]: time="2025-12-16T03:17:15.681170878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:17:15.681343 containerd[1630]: time="2025-12-16T03:17:15.681300223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:15.681780 kubelet[2816]: E1216 03:17:15.681695 2816 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:17:15.682373 kubelet[2816]: E1216 03:17:15.681842 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:17:15.682679 containerd[1630]: time="2025-12-16T03:17:15.682202887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:17:15.683381 kubelet[2816]: E1216 03:17:15.683293 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c989,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-dcc4656ff-vfbkw_calico-system(2deef273-b182-480c-9527-049c0bd96660): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:15.684531 kubelet[2816]: E1216 03:17:15.684489 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:17:15.735395 containerd[1630]: time="2025-12-16T03:17:15.735338083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j4ql6,Uid:71ed773f-2e6f-481b-a8da-215c519a3532,Namespace:kube-system,Attempt:0,}" Dec 16 03:17:15.840251 systemd-networkd[1541]: cali6f3863f35a2: Gained IPv6LL Dec 16 03:17:15.900309 systemd-networkd[1541]: calica4259ef924: Link UP Dec 16 03:17:15.901695 systemd-networkd[1541]: calica4259ef924: Gained carrier Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.798 [INFO][4705] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--j4ql6-eth0 coredns-668d6bf9bc- kube-system 71ed773f-2e6f-481b-a8da-215c519a3532 838 0 2025-12-16 03:16:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-6-1137cb7bd3 coredns-668d6bf9bc-j4ql6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calica4259ef924 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-j4ql6" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--j4ql6-" Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.798 [INFO][4705] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-j4ql6" 
WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--j4ql6-eth0" Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.843 [INFO][4717] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" HandleID="k8s-pod-network.a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--j4ql6-eth0" Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.843 [INFO][4717] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" HandleID="k8s-pod-network.a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--j4ql6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-6-1137cb7bd3", "pod":"coredns-668d6bf9bc-j4ql6", "timestamp":"2025-12-16 03:17:15.84355939 +0000 UTC"}, Hostname:"ci-4547-0-0-6-1137cb7bd3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.844 [INFO][4717] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.844 [INFO][4717] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.844 [INFO][4717] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-6-1137cb7bd3' Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.852 [INFO][4717] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.860 [INFO][4717] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.867 [INFO][4717] ipam/ipam.go 511: Trying affinity for 192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.869 [INFO][4717] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.872 [INFO][4717] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.873 [INFO][4717] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.878 [INFO][4717] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1 Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.881 [INFO][4717] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.888 [INFO][4717] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.67.71/26] block=192.168.67.64/26 handle="k8s-pod-network.a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.888 [INFO][4717] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.71/26] handle="k8s-pod-network.a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.888 [INFO][4717] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:17:15.917901 containerd[1630]: 2025-12-16 03:17:15.888 [INFO][4717] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.67.71/26] IPv6=[] ContainerID="a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" HandleID="k8s-pod-network.a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--j4ql6-eth0" Dec 16 03:17:15.920475 containerd[1630]: 2025-12-16 03:17:15.893 [INFO][4705] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-j4ql6" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--j4ql6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--j4ql6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"71ed773f-2e6f-481b-a8da-215c519a3532", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 16, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-6-1137cb7bd3", ContainerID:"", Pod:"coredns-668d6bf9bc-j4ql6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica4259ef924", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:17:15.920475 containerd[1630]: 2025-12-16 03:17:15.893 [INFO][4705] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.71/32] ContainerID="a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-j4ql6" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--j4ql6-eth0" Dec 16 03:17:15.920475 containerd[1630]: 2025-12-16 03:17:15.893 [INFO][4705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica4259ef924 ContainerID="a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-j4ql6" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--j4ql6-eth0" Dec 16 03:17:15.920475 containerd[1630]: 2025-12-16 03:17:15.902 [INFO][4705] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-j4ql6" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--j4ql6-eth0" Dec 16 03:17:15.920475 containerd[1630]: 2025-12-16 03:17:15.904 [INFO][4705] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-j4ql6" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--j4ql6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--j4ql6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"71ed773f-2e6f-481b-a8da-215c519a3532", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 16, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-6-1137cb7bd3", ContainerID:"a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1", Pod:"coredns-668d6bf9bc-j4ql6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica4259ef924", 
MAC:"42:90:f8:a4:92:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:17:15.920475 containerd[1630]: 2025-12-16 03:17:15.914 [INFO][4705] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-j4ql6" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-coredns--668d6bf9bc--j4ql6-eth0" Dec 16 03:17:15.948571 containerd[1630]: time="2025-12-16T03:17:15.948135243Z" level=info msg="connecting to shim a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1" address="unix:///run/containerd/s/0d050c765493d4d12a4e83ecb63c6332b9d748f01e4b6fafb4f958707f5f1be3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:17:15.960000 audit[4755]: NETFILTER_CFG table=filter:128 family=2 entries=58 op=nft_register_chain pid=4755 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:17:15.960000 audit[4755]: SYSCALL arch=c000003e syscall=46 success=yes exit=26760 a0=3 a1=7ffc54a13b00 a2=0 a3=7ffc54a13aec items=0 ppid=4321 pid=4755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:15.960000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:17:15.986266 
kubelet[2816]: E1216 03:17:15.986228 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:17:15.986968 kubelet[2816]: E1216 03:17:15.986482 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:17:15.993125 systemd[1]: Started cri-containerd-a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1.scope - libcontainer container a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1. 
Dec 16 03:17:16.015000 audit: BPF prog-id=241 op=LOAD Dec 16 03:17:16.016000 audit: BPF prog-id=242 op=LOAD Dec 16 03:17:16.016000 audit[4754]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4742 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:16.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353765373461396130303039623638393737393662313930623066 Dec 16 03:17:16.016000 audit: BPF prog-id=242 op=UNLOAD Dec 16 03:17:16.016000 audit[4754]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4742 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:16.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353765373461396130303039623638393737393662313930623066 Dec 16 03:17:16.016000 audit: BPF prog-id=243 op=LOAD Dec 16 03:17:16.016000 audit[4754]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4742 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:16.016000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353765373461396130303039623638393737393662313930623066 Dec 16 03:17:16.016000 audit: BPF prog-id=244 op=LOAD Dec 16 03:17:16.016000 audit[4754]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4742 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:16.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353765373461396130303039623638393737393662313930623066 Dec 16 03:17:16.016000 audit: BPF prog-id=244 op=UNLOAD Dec 16 03:17:16.016000 audit[4754]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4742 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:16.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353765373461396130303039623638393737393662313930623066 Dec 16 03:17:16.016000 audit: BPF prog-id=243 op=UNLOAD Dec 16 03:17:16.016000 audit[4754]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4742 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:17:16.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353765373461396130303039623638393737393662313930623066 Dec 16 03:17:16.016000 audit: BPF prog-id=245 op=LOAD Dec 16 03:17:16.016000 audit[4754]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4742 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:16.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353765373461396130303039623638393737393662313930623066 Dec 16 03:17:16.026823 kubelet[2816]: I1216 03:17:16.021131 2816 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lgq5m" podStartSLOduration=43.021107829 podStartE2EDuration="43.021107829s" podCreationTimestamp="2025-12-16 03:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:17:15.999554687 +0000 UTC m=+49.368445396" watchObservedRunningTime="2025-12-16 03:17:16.021107829 +0000 UTC m=+49.389998537" Dec 16 03:17:16.043000 audit[4775]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4775 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:16.043000 audit[4775]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd216a3200 a2=0 a3=7ffd216a31ec items=0 ppid=2942 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:16.043000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:16.057000 audit[4775]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4775 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:16.057000 audit[4775]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd216a3200 a2=0 a3=0 items=0 ppid=2942 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:16.057000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:16.079116 containerd[1630]: time="2025-12-16T03:17:16.079079824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j4ql6,Uid:71ed773f-2e6f-481b-a8da-215c519a3532,Namespace:kube-system,Attempt:0,} returns sandbox id \"a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1\"" Dec 16 03:17:16.082898 containerd[1630]: time="2025-12-16T03:17:16.082852891Z" level=info msg="CreateContainer within sandbox \"a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:17:16.096359 containerd[1630]: time="2025-12-16T03:17:16.095711779Z" level=info msg="Container b2399620bada0acb5a6887f9b8b543cc75898b8ea82dacb1b4cad6e48d472fb5: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:17:16.096745 systemd-networkd[1541]: vxlan.calico: Gained IPv6LL Dec 16 03:17:16.110489 containerd[1630]: time="2025-12-16T03:17:16.110312786Z" level=info msg="CreateContainer within sandbox 
\"a057e74a9a0009b6897796b190b0f9ef3e0c320b3dc88e6248741e6864c5f7f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b2399620bada0acb5a6887f9b8b543cc75898b8ea82dacb1b4cad6e48d472fb5\"" Dec 16 03:17:16.112799 containerd[1630]: time="2025-12-16T03:17:16.111373550Z" level=info msg="StartContainer for \"b2399620bada0acb5a6887f9b8b543cc75898b8ea82dacb1b4cad6e48d472fb5\"" Dec 16 03:17:16.113558 containerd[1630]: time="2025-12-16T03:17:16.113297246Z" level=info msg="connecting to shim b2399620bada0acb5a6887f9b8b543cc75898b8ea82dacb1b4cad6e48d472fb5" address="unix:///run/containerd/s/0d050c765493d4d12a4e83ecb63c6332b9d748f01e4b6fafb4f958707f5f1be3" protocol=ttrpc version=3 Dec 16 03:17:16.127714 containerd[1630]: time="2025-12-16T03:17:16.127684743Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:16.129125 containerd[1630]: time="2025-12-16T03:17:16.129098953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:16.129205 containerd[1630]: time="2025-12-16T03:17:16.129108893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:17:16.129314 kubelet[2816]: E1216 03:17:16.129270 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:17:16.129367 kubelet[2816]: E1216 03:17:16.129323 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:17:16.129467 kubelet[2816]: E1216 03:17:16.129422 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsgrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9b7f5fc68-z5vrj_calico-apiserver(0981f349-361d-45e9-bda1-a29e4e4386d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:16.131277 kubelet[2816]: E1216 03:17:16.131253 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6" Dec 16 03:17:16.139931 systemd[1]: Started cri-containerd-b2399620bada0acb5a6887f9b8b543cc75898b8ea82dacb1b4cad6e48d472fb5.scope - libcontainer container b2399620bada0acb5a6887f9b8b543cc75898b8ea82dacb1b4cad6e48d472fb5. 
Dec 16 03:17:16.150000 audit: BPF prog-id=246 op=LOAD Dec 16 03:17:16.150000 audit: BPF prog-id=247 op=LOAD Dec 16 03:17:16.150000 audit[4785]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4742 pid=4785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:16.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232333939363230626164613061636235613638383766396238623534 Dec 16 03:17:16.150000 audit: BPF prog-id=247 op=UNLOAD Dec 16 03:17:16.150000 audit[4785]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4742 pid=4785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:16.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232333939363230626164613061636235613638383766396238623534 Dec 16 03:17:16.151000 audit: BPF prog-id=248 op=LOAD Dec 16 03:17:16.151000 audit[4785]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4742 pid=4785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:16.151000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232333939363230626164613061636235613638383766396238623534 Dec 16 03:17:16.151000 audit: BPF prog-id=249 op=LOAD Dec 16 03:17:16.151000 audit[4785]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4742 pid=4785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:16.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232333939363230626164613061636235613638383766396238623534 Dec 16 03:17:16.151000 audit: BPF prog-id=249 op=UNLOAD Dec 16 03:17:16.151000 audit[4785]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4742 pid=4785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:16.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232333939363230626164613061636235613638383766396238623534 Dec 16 03:17:16.151000 audit: BPF prog-id=248 op=UNLOAD Dec 16 03:17:16.151000 audit[4785]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4742 pid=4785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:17:16.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232333939363230626164613061636235613638383766396238623534 Dec 16 03:17:16.151000 audit: BPF prog-id=250 op=LOAD Dec 16 03:17:16.151000 audit[4785]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4742 pid=4785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:16.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232333939363230626164613061636235613638383766396238623534 Dec 16 03:17:16.173399 containerd[1630]: time="2025-12-16T03:17:16.173316172Z" level=info msg="StartContainer for \"b2399620bada0acb5a6887f9b8b543cc75898b8ea82dacb1b4cad6e48d472fb5\" returns successfully" Dec 16 03:17:16.351960 systemd-networkd[1541]: calic0456f221d1: Gained IPv6LL Dec 16 03:17:16.672212 systemd-networkd[1541]: cali8c5cb4c2254: Gained IPv6LL Dec 16 03:17:16.737941 containerd[1630]: time="2025-12-16T03:17:16.737694987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z7p8t,Uid:83f66e0e-6c09-4937-932e-1ce867d20286,Namespace:calico-system,Attempt:0,}" Dec 16 03:17:16.908078 systemd-networkd[1541]: cali6beafce33af: Link UP Dec 16 03:17:16.908872 systemd-networkd[1541]: cali6beafce33af: Gained carrier Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.807 [INFO][4819] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--6--1137cb7bd3-k8s-goldmane--666569f655--z7p8t-eth0 
goldmane-666569f655- calico-system 83f66e0e-6c09-4937-932e-1ce867d20286 836 0 2025-12-16 03:16:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547-0-0-6-1137cb7bd3 goldmane-666569f655-z7p8t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6beafce33af [] [] }} ContainerID="4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" Namespace="calico-system" Pod="goldmane-666569f655-z7p8t" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-goldmane--666569f655--z7p8t-" Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.808 [INFO][4819] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" Namespace="calico-system" Pod="goldmane-666569f655-z7p8t" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-goldmane--666569f655--z7p8t-eth0" Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.868 [INFO][4830] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" HandleID="k8s-pod-network.4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-goldmane--666569f655--z7p8t-eth0" Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.868 [INFO][4830] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" HandleID="k8s-pod-network.4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-goldmane--666569f655--z7p8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-6-1137cb7bd3", 
"pod":"goldmane-666569f655-z7p8t", "timestamp":"2025-12-16 03:17:16.868277319 +0000 UTC"}, Hostname:"ci-4547-0-0-6-1137cb7bd3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.868 [INFO][4830] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.868 [INFO][4830] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.868 [INFO][4830] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-6-1137cb7bd3' Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.876 [INFO][4830] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.881 [INFO][4830] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.887 [INFO][4830] ipam/ipam.go 511: Trying affinity for 192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.889 [INFO][4830] ipam/ipam.go 158: Attempting to load block cidr=192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.891 [INFO][4830] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.891 [INFO][4830] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" 
host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.893 [INFO][4830] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993 Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.897 [INFO][4830] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.902 [INFO][4830] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.67.72/26] block=192.168.67.64/26 handle="k8s-pod-network.4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.902 [INFO][4830] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.67.72/26] handle="k8s-pod-network.4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" host="ci-4547-0-0-6-1137cb7bd3" Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.902 [INFO][4830] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:17:16.923897 containerd[1630]: 2025-12-16 03:17:16.902 [INFO][4830] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.67.72/26] IPv6=[] ContainerID="4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" HandleID="k8s-pod-network.4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" Workload="ci--4547--0--0--6--1137cb7bd3-k8s-goldmane--666569f655--z7p8t-eth0" Dec 16 03:17:16.926388 containerd[1630]: 2025-12-16 03:17:16.905 [INFO][4819] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" Namespace="calico-system" Pod="goldmane-666569f655-z7p8t" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-goldmane--666569f655--z7p8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--6--1137cb7bd3-k8s-goldmane--666569f655--z7p8t-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"83f66e0e-6c09-4937-932e-1ce867d20286", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 16, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-6-1137cb7bd3", ContainerID:"", Pod:"goldmane-666569f655-z7p8t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.67.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6beafce33af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:17:16.926388 containerd[1630]: 2025-12-16 03:17:16.905 [INFO][4819] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.67.72/32] ContainerID="4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" Namespace="calico-system" Pod="goldmane-666569f655-z7p8t" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-goldmane--666569f655--z7p8t-eth0" Dec 16 03:17:16.926388 containerd[1630]: 2025-12-16 03:17:16.905 [INFO][4819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6beafce33af ContainerID="4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" Namespace="calico-system" Pod="goldmane-666569f655-z7p8t" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-goldmane--666569f655--z7p8t-eth0" Dec 16 03:17:16.926388 containerd[1630]: 2025-12-16 03:17:16.909 [INFO][4819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" Namespace="calico-system" Pod="goldmane-666569f655-z7p8t" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-goldmane--666569f655--z7p8t-eth0" Dec 16 03:17:16.926388 containerd[1630]: 2025-12-16 03:17:16.909 [INFO][4819] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" Namespace="calico-system" Pod="goldmane-666569f655-z7p8t" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-goldmane--666569f655--z7p8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--6--1137cb7bd3-k8s-goldmane--666569f655--z7p8t-eth0", GenerateName:"goldmane-666569f655-", 
Namespace:"calico-system", SelfLink:"", UID:"83f66e0e-6c09-4937-932e-1ce867d20286", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 16, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-6-1137cb7bd3", ContainerID:"4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993", Pod:"goldmane-666569f655-z7p8t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.67.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6beafce33af", MAC:"da:b9:cd:7f:b1:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:17:16.926388 containerd[1630]: 2025-12-16 03:17:16.919 [INFO][4819] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" Namespace="calico-system" Pod="goldmane-666569f655-z7p8t" WorkloadEndpoint="ci--4547--0--0--6--1137cb7bd3-k8s-goldmane--666569f655--z7p8t-eth0" Dec 16 03:17:16.953000 audit[4844]: NETFILTER_CFG table=filter:131 family=2 entries=70 op=nft_register_chain pid=4844 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:17:16.953000 audit[4844]: SYSCALL arch=c000003e syscall=46 success=yes exit=33956 a0=3 a1=7ffcea3edbb0 a2=0 a3=7ffcea3edb9c items=0 ppid=4321 pid=4844 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:16.953000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:17:16.972366 containerd[1630]: time="2025-12-16T03:17:16.971927075Z" level=info msg="connecting to shim 4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993" address="unix:///run/containerd/s/1619ed84e7cb0e2c7d70c5c2b7cd6eb58ef52b30978885bf898c32bff97d1a6f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:17:17.001773 kubelet[2816]: E1216 03:17:17.000987 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:17:17.005776 kubelet[2816]: E1216 03:17:17.005418 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6" Dec 16 03:17:17.008106 systemd[1]: Started 
cri-containerd-4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993.scope - libcontainer container 4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993. Dec 16 03:17:17.038343 kubelet[2816]: I1216 03:17:17.037304 2816 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-j4ql6" podStartSLOduration=44.037287155 podStartE2EDuration="44.037287155s" podCreationTimestamp="2025-12-16 03:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:17:17.015303129 +0000 UTC m=+50.384193837" watchObservedRunningTime="2025-12-16 03:17:17.037287155 +0000 UTC m=+50.406177862" Dec 16 03:17:17.055000 audit: BPF prog-id=251 op=LOAD Dec 16 03:17:17.056000 audit: BPF prog-id=252 op=LOAD Dec 16 03:17:17.056000 audit[4863]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4853 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:17.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393063313333663637343537626563333231343465666634343533 Dec 16 03:17:17.056000 audit: BPF prog-id=252 op=UNLOAD Dec 16 03:17:17.056000 audit[4863]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4853 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:17.056000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393063313333663637343537626563333231343465666634343533 Dec 16 03:17:17.056000 audit: BPF prog-id=253 op=LOAD Dec 16 03:17:17.056000 audit[4863]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4853 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:17.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393063313333663637343537626563333231343465666634343533 Dec 16 03:17:17.056000 audit: BPF prog-id=254 op=LOAD Dec 16 03:17:17.056000 audit[4863]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4853 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:17.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393063313333663637343537626563333231343465666634343533 Dec 16 03:17:17.056000 audit: BPF prog-id=254 op=UNLOAD Dec 16 03:17:17.056000 audit[4863]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4853 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:17:17.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393063313333663637343537626563333231343465666634343533 Dec 16 03:17:17.056000 audit: BPF prog-id=253 op=UNLOAD Dec 16 03:17:17.056000 audit[4863]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4853 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:17.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393063313333663637343537626563333231343465666634343533 Dec 16 03:17:17.056000 audit: BPF prog-id=255 op=LOAD Dec 16 03:17:17.056000 audit[4863]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4853 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:17.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436393063313333663637343537626563333231343465666634343533 Dec 16 03:17:17.061000 audit[4885]: NETFILTER_CFG table=filter:132 family=2 entries=17 op=nft_register_rule pid=4885 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:17.061000 audit[4885]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcfc3446e0 a2=0 a3=7ffcfc3446cc items=0 
ppid=2942 pid=4885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:17.061000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:17.069000 audit[4885]: NETFILTER_CFG table=nat:133 family=2 entries=35 op=nft_register_chain pid=4885 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:17.069000 audit[4885]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcfc3446e0 a2=0 a3=7ffcfc3446cc items=0 ppid=2942 pid=4885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:17.069000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:17.107328 containerd[1630]: time="2025-12-16T03:17:17.107188506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-z7p8t,Uid:83f66e0e-6c09-4937-932e-1ce867d20286,Namespace:calico-system,Attempt:0,} returns sandbox id \"4690c133f67457bec32144eff4453ca502b1abd6b7bf3dcddc05abacdf646993\"" Dec 16 03:17:17.112722 containerd[1630]: time="2025-12-16T03:17:17.112683938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:17:17.121056 systemd-networkd[1541]: cali2e8d64386c6: Gained IPv6LL Dec 16 03:17:17.375945 systemd-networkd[1541]: calica4259ef924: Gained IPv6LL Dec 16 03:17:17.550824 containerd[1630]: time="2025-12-16T03:17:17.550747761Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:17.552345 containerd[1630]: time="2025-12-16T03:17:17.552262648Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:17:17.552345 containerd[1630]: time="2025-12-16T03:17:17.552304130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:17.552632 kubelet[2816]: E1216 03:17:17.552597 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:17:17.552820 kubelet[2816]: E1216 03:17:17.552739 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:17:17.553263 kubelet[2816]: E1216 03:17:17.553162 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhglz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z7p8t_calico-system(83f66e0e-6c09-4937-932e-1ce867d20286): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:17.566126 kubelet[2816]: E1216 03:17:17.555016 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z7p8t" podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:17:18.038470 kubelet[2816]: E1216 03:17:18.038215 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z7p8t" podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:17:18.108000 audit[4894]: NETFILTER_CFG table=filter:134 family=2 entries=14 op=nft_register_rule pid=4894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:18.108000 audit[4894]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc5e9b6d60 a2=0 a3=7ffc5e9b6d4c items=0 ppid=2942 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:18.108000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:18.121000 audit[4894]: NETFILTER_CFG table=nat:135 family=2 entries=56 op=nft_register_chain pid=4894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:18.121000 audit[4894]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc5e9b6d60 a2=0 a3=7ffc5e9b6d4c items=0 ppid=2942 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:18.121000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:18.401185 systemd-networkd[1541]: cali6beafce33af: Gained IPv6LL Dec 16 03:17:19.001553 kubelet[2816]: E1216 03:17:19.001461 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z7p8t" podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:17:19.139000 audit[4897]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=4897 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:19.139000 audit[4897]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff67f411d0 a2=0 a3=7fff67f411bc items=0 ppid=2942 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:19.139000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:19.143000 audit[4897]: NETFILTER_CFG table=nat:137 family=2 entries=20 op=nft_register_rule pid=4897 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:17:19.143000 audit[4897]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff67f411d0 a2=0 a3=7fff67f411bc items=0 ppid=2942 pid=4897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:17:19.143000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:17:26.899097 containerd[1630]: time="2025-12-16T03:17:26.898881058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:17:27.329826 containerd[1630]: time="2025-12-16T03:17:27.329608728Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:27.331225 containerd[1630]: 
time="2025-12-16T03:17:27.331124829Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:17:27.331422 containerd[1630]: time="2025-12-16T03:17:27.331316775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:27.331486 kubelet[2816]: E1216 03:17:27.331369 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:17:27.331486 kubelet[2816]: E1216 03:17:27.331417 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:17:27.332833 kubelet[2816]: E1216 03:17:27.331522 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e489f2e44bb74625b167ec1e6d4af2e7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fj9zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c6d594499-pd2v8_calico-system(4cefbc96-7243-4733-b6b3-1ddb2b5191b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:27.334010 containerd[1630]: time="2025-12-16T03:17:27.333977544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:17:27.753117 containerd[1630]: 
time="2025-12-16T03:17:27.753055900Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:27.754216 containerd[1630]: time="2025-12-16T03:17:27.754174847Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:17:27.754389 containerd[1630]: time="2025-12-16T03:17:27.754252980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:27.754451 kubelet[2816]: E1216 03:17:27.754357 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:17:27.754451 kubelet[2816]: E1216 03:17:27.754397 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:17:27.754555 kubelet[2816]: E1216 03:17:27.754504 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fj9zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c6d594499-pd2v8_calico-system(4cefbc96-7243-4733-b6b3-1ddb2b5191b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:27.755970 kubelet[2816]: E1216 03:17:27.755933 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c6d594499-pd2v8" podUID="4cefbc96-7243-4733-b6b3-1ddb2b5191b3" Dec 16 03:17:29.737592 containerd[1630]: time="2025-12-16T03:17:29.737389038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:17:30.185962 containerd[1630]: time="2025-12-16T03:17:30.185703613Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:30.187389 containerd[1630]: time="2025-12-16T03:17:30.187254639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:17:30.187478 containerd[1630]: time="2025-12-16T03:17:30.187395464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:30.187744 kubelet[2816]: E1216 03:17:30.187649 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:17:30.188230 kubelet[2816]: E1216 03:17:30.187785 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:17:30.188371 kubelet[2816]: E1216 03:17:30.188233 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsgrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9b7f5fc68-z5vrj_calico-apiserver(0981f349-361d-45e9-bda1-a29e4e4386d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:30.189652 kubelet[2816]: E1216 03:17:30.189492 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6" Dec 16 03:17:30.190408 containerd[1630]: time="2025-12-16T03:17:30.190372080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:17:30.634691 containerd[1630]: time="2025-12-16T03:17:30.634591785Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
03:17:30.636762 containerd[1630]: time="2025-12-16T03:17:30.636659669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:17:30.636903 containerd[1630]: time="2025-12-16T03:17:30.636864308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:30.637096 kubelet[2816]: E1216 03:17:30.637025 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:17:30.637178 kubelet[2816]: E1216 03:17:30.637099 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:17:30.637551 kubelet[2816]: E1216 03:17:30.637455 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c989,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-dcc4656ff-vfbkw_calico-system(2deef273-b182-480c-9527-049c0bd96660): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:30.639021 containerd[1630]: time="2025-12-16T03:17:30.638947913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:17:30.639591 kubelet[2816]: E1216 03:17:30.639510 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:17:31.063912 containerd[1630]: time="2025-12-16T03:17:31.063850544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
03:17:31.065407 containerd[1630]: time="2025-12-16T03:17:31.065355138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:17:31.065603 containerd[1630]: time="2025-12-16T03:17:31.065379946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:31.065700 kubelet[2816]: E1216 03:17:31.065605 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:17:31.065700 kubelet[2816]: E1216 03:17:31.065658 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:17:31.066120 kubelet[2816]: E1216 03:17:31.066066 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmlln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vbf4g_calico-system(8f212018-4b88-48fc-94d2-420427ed0241): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 03:17:31.066859 containerd[1630]: time="2025-12-16T03:17:31.066821918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:17:31.492989 containerd[1630]: time="2025-12-16T03:17:31.492935687Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:31.494469 containerd[1630]: time="2025-12-16T03:17:31.494409801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:17:31.494554 containerd[1630]: time="2025-12-16T03:17:31.494489707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:31.494722 kubelet[2816]: E1216 03:17:31.494681 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:17:31.495011 kubelet[2816]: E1216 03:17:31.494731 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:17:31.495065 kubelet[2816]: E1216 03:17:31.495026 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2vvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9b7f5fc68-6vh4x_calico-apiserver(f3b4d493-b815-435b-8539-393930301f5a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:31.495718 containerd[1630]: time="2025-12-16T03:17:31.495688354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:17:31.497106 kubelet[2816]: E1216 03:17:31.497071 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:17:31.945266 containerd[1630]: time="2025-12-16T03:17:31.945134346Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:31.946556 containerd[1630]: time="2025-12-16T03:17:31.946518414Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:17:31.946954 containerd[1630]: time="2025-12-16T03:17:31.946593991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:31.947017 kubelet[2816]: E1216 03:17:31.946727 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:17:31.947017 kubelet[2816]: E1216 03:17:31.946801 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:17:31.947091 kubelet[2816]: E1216 03:17:31.947029 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhglz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z7p8t_calico-system(83f66e0e-6c09-4937-932e-1ce867d20286): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:31.947343 containerd[1630]: time="2025-12-16T03:17:31.947323654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:17:31.948776 kubelet[2816]: E1216 03:17:31.948717 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z7p8t" 
podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:17:32.374167 containerd[1630]: time="2025-12-16T03:17:32.374117399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:32.375541 containerd[1630]: time="2025-12-16T03:17:32.375409487Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:17:32.375541 containerd[1630]: time="2025-12-16T03:17:32.375479883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:32.375747 kubelet[2816]: E1216 03:17:32.375679 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:17:32.375818 kubelet[2816]: E1216 03:17:32.375730 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:17:32.376264 kubelet[2816]: E1216 03:17:32.375904 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmlln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vbf4g_calico-system(8f212018-4b88-48fc-94d2-420427ed0241): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:32.377559 kubelet[2816]: E1216 03:17:32.377514 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:17:38.736745 kubelet[2816]: E1216 03:17:38.736633 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c6d594499-pd2v8" podUID="4cefbc96-7243-4733-b6b3-1ddb2b5191b3" Dec 16 03:17:42.738394 kubelet[2816]: E1216 03:17:42.737744 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:17:42.738394 kubelet[2816]: E1216 03:17:42.738240 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:17:43.735677 kubelet[2816]: E1216 03:17:43.735626 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6" Dec 16 03:17:44.738167 kubelet[2816]: E1216 03:17:44.738123 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed 
to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z7p8t" podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:17:45.750078 kubelet[2816]: E1216 03:17:45.749993 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:17:53.738601 containerd[1630]: time="2025-12-16T03:17:53.738541620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:17:54.159834 containerd[1630]: time="2025-12-16T03:17:54.159700301Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:54.160948 containerd[1630]: time="2025-12-16T03:17:54.160740991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:17:54.161114 containerd[1630]: time="2025-12-16T03:17:54.160913970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:54.161207 kubelet[2816]: E1216 
03:17:54.161154 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:17:54.161207 kubelet[2816]: E1216 03:17:54.161205 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:17:54.161953 kubelet[2816]: E1216 03:17:54.161314 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e489f2e44bb74625b167ec1e6d4af2e7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fj9zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,Localh
ostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c6d594499-pd2v8_calico-system(4cefbc96-7243-4733-b6b3-1ddb2b5191b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:54.164879 containerd[1630]: time="2025-12-16T03:17:54.164850732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:17:54.596014 containerd[1630]: time="2025-12-16T03:17:54.595963069Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:54.597362 containerd[1630]: time="2025-12-16T03:17:54.597312708Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:17:54.597450 containerd[1630]: time="2025-12-16T03:17:54.597360006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:54.597737 kubelet[2816]: E1216 03:17:54.597641 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:17:54.597737 kubelet[2816]: E1216 03:17:54.597702 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:17:54.598446 kubelet[2816]: E1216 03:17:54.598375 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fj9zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c6d594499-pd2v8_calico-system(4cefbc96-7243-4733-b6b3-1ddb2b5191b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:54.599772 kubelet[2816]: E1216 03:17:54.599713 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c6d594499-pd2v8" podUID="4cefbc96-7243-4733-b6b3-1ddb2b5191b3" Dec 16 03:17:55.736800 containerd[1630]: time="2025-12-16T03:17:55.736465519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:17:56.166190 containerd[1630]: time="2025-12-16T03:17:56.166066198Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:56.167301 containerd[1630]: time="2025-12-16T03:17:56.167243425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:17:56.167394 containerd[1630]: time="2025-12-16T03:17:56.167360993Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:56.168586 kubelet[2816]: E1216 03:17:56.167572 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:17:56.168853 kubelet[2816]: E1216 03:17:56.168605 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:17:56.169177 kubelet[2816]: E1216 03:17:56.169110 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2vvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9b7f5fc68-6vh4x_calico-apiserver(f3b4d493-b815-435b-8539-393930301f5a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:56.170343 kubelet[2816]: E1216 03:17:56.170308 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:17:57.735770 containerd[1630]: time="2025-12-16T03:17:57.735570154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:17:58.173656 containerd[1630]: time="2025-12-16T03:17:58.173383539Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:58.174623 containerd[1630]: time="2025-12-16T03:17:58.174577973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:17:58.174712 containerd[1630]: time="2025-12-16T03:17:58.174657872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:58.174910 kubelet[2816]: E1216 03:17:58.174847 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:17:58.175294 kubelet[2816]: E1216 03:17:58.174918 2816 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:17:58.175294 kubelet[2816]: E1216 03:17:58.175079 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c989,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-dcc4656ff-vfbkw_calico-system(2deef273-b182-480c-9527-049c0bd96660): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:58.176606 kubelet[2816]: E1216 03:17:58.176570 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:17:58.737729 
containerd[1630]: time="2025-12-16T03:17:58.737675195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:17:59.173024 containerd[1630]: time="2025-12-16T03:17:59.172795421Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:59.174193 containerd[1630]: time="2025-12-16T03:17:59.174150155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:17:59.174343 containerd[1630]: time="2025-12-16T03:17:59.174296907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:59.174659 kubelet[2816]: E1216 03:17:59.174607 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:17:59.174823 kubelet[2816]: E1216 03:17:59.174805 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:17:59.175256 kubelet[2816]: E1216 03:17:59.175137 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsgrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9b7f5fc68-z5vrj_calico-apiserver(0981f349-361d-45e9-bda1-a29e4e4386d6): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:59.176091 containerd[1630]: time="2025-12-16T03:17:59.176050362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:17:59.176708 kubelet[2816]: E1216 03:17:59.176453 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6" Dec 16 03:17:59.601545 containerd[1630]: time="2025-12-16T03:17:59.601345505Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:17:59.602531 containerd[1630]: time="2025-12-16T03:17:59.602430969Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:17:59.602531 containerd[1630]: time="2025-12-16T03:17:59.602508543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:17:59.603783 kubelet[2816]: E1216 03:17:59.602852 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:17:59.603783 kubelet[2816]: E1216 03:17:59.602927 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:17:59.603783 kubelet[2816]: E1216 03:17:59.603039 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmlln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vbf4g_calico-system(8f212018-4b88-48fc-94d2-420427ed0241): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:17:59.606713 containerd[1630]: time="2025-12-16T03:17:59.606656311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:18:00.073380 containerd[1630]: time="2025-12-16T03:18:00.073308496Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:00.074717 containerd[1630]: time="2025-12-16T03:18:00.074661209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:18:00.074832 containerd[1630]: time="2025-12-16T03:18:00.074788616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:00.075035 kubelet[2816]: E1216 03:18:00.074988 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:18:00.075109 kubelet[2816]: E1216 03:18:00.075049 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:18:00.075346 kubelet[2816]: E1216 03:18:00.075297 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmlln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vbf4g_calico-system(8f212018-4b88-48fc-94d2-420427ed0241): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:00.076067 containerd[1630]: time="2025-12-16T03:18:00.075938252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:18:00.076608 kubelet[2816]: E1216 03:18:00.076506 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:18:00.521328 containerd[1630]: time="2025-12-16T03:18:00.521271225Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:00.522609 containerd[1630]: time="2025-12-16T03:18:00.522569647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:18:00.522609 containerd[1630]: time="2025-12-16T03:18:00.522631242Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:00.522858 kubelet[2816]: E1216 03:18:00.522814 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:18:00.523111 kubelet[2816]: E1216 03:18:00.522875 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:18:00.523111 kubelet[2816]: E1216 03:18:00.523036 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhglz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z7p8t_calico-system(83f66e0e-6c09-4937-932e-1ce867d20286): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:00.524580 kubelet[2816]: E1216 03:18:00.524548 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z7p8t" podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:18:08.737692 kubelet[2816]: E1216 03:18:08.737287 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:18:09.738391 kubelet[2816]: E1216 03:18:09.737853 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:18:09.739275 kubelet[2816]: E1216 03:18:09.738380 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed 
to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c6d594499-pd2v8" podUID="4cefbc96-7243-4733-b6b3-1ddb2b5191b3" Dec 16 03:18:10.738377 kubelet[2816]: E1216 03:18:10.737535 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6" Dec 16 03:18:11.736287 kubelet[2816]: E1216 03:18:11.736250 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vbf4g" 
podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:18:12.739552 kubelet[2816]: E1216 03:18:12.738658 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z7p8t" podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:18:20.738097 kubelet[2816]: E1216 03:18:20.737539 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:18:21.736260 kubelet[2816]: E1216 03:18:21.736215 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6" Dec 16 03:18:21.737524 kubelet[2816]: E1216 03:18:21.737485 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c6d594499-pd2v8" podUID="4cefbc96-7243-4733-b6b3-1ddb2b5191b3" Dec 16 03:18:24.739363 kubelet[2816]: E1216 03:18:24.738930 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:18:25.741434 kubelet[2816]: E1216 03:18:25.741376 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z7p8t" podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:18:25.744630 kubelet[2816]: E1216 03:18:25.744595 2816 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:18:31.737993 kubelet[2816]: E1216 03:18:31.737580 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:18:34.737845 kubelet[2816]: E1216 03:18:34.737793 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6" Dec 16 03:18:35.737008 containerd[1630]: time="2025-12-16T03:18:35.736966339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:18:36.169049 containerd[1630]: time="2025-12-16T03:18:36.168506692Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:36.169936 containerd[1630]: time="2025-12-16T03:18:36.169876766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:18:36.170479 kubelet[2816]: E1216 03:18:36.170394 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:18:36.170479 kubelet[2816]: E1216 03:18:36.170451 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:18:36.172055 kubelet[2816]: E1216 03:18:36.171305 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e489f2e44bb74625b167ec1e6d4af2e7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fj9zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c6d594499-pd2v8_calico-system(4cefbc96-7243-4733-b6b3-1ddb2b5191b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:36.178222 containerd[1630]: time="2025-12-16T03:18:36.169972959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:36.178222 
containerd[1630]: time="2025-12-16T03:18:36.175478073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:18:36.597902 containerd[1630]: time="2025-12-16T03:18:36.597803672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:36.599242 containerd[1630]: time="2025-12-16T03:18:36.599183795Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:18:36.599488 containerd[1630]: time="2025-12-16T03:18:36.599319333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:36.599647 kubelet[2816]: E1216 03:18:36.599519 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:18:36.599713 kubelet[2816]: E1216 03:18:36.599641 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:18:36.600027 kubelet[2816]: E1216 03:18:36.599908 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fj9zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c6d594499-pd2v8_calico-system(4cefbc96-7243-4733-b6b3-1ddb2b5191b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:36.601811 kubelet[2816]: E1216 03:18:36.601568 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c6d594499-pd2v8" podUID="4cefbc96-7243-4733-b6b3-1ddb2b5191b3" Dec 16 03:18:38.735973 kubelet[2816]: E1216 03:18:38.735464 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z7p8t" podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:18:39.747222 containerd[1630]: time="2025-12-16T03:18:39.746898535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:18:40.195811 containerd[1630]: time="2025-12-16T03:18:40.195625283Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:40.197274 containerd[1630]: time="2025-12-16T03:18:40.197233765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:18:40.197398 containerd[1630]: time="2025-12-16T03:18:40.197310511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:40.197497 kubelet[2816]: E1216 03:18:40.197450 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:40.197817 kubelet[2816]: E1216 03:18:40.197502 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:40.197817 kubelet[2816]: E1216 03:18:40.197621 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2vvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9b7f5fc68-6vh4x_calico-apiserver(f3b4d493-b815-435b-8539-393930301f5a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:40.198928 kubelet[2816]: E1216 03:18:40.198895 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:18:40.739508 containerd[1630]: time="2025-12-16T03:18:40.739268294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:18:41.170518 containerd[1630]: time="2025-12-16T03:18:41.170300737Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:41.172947 containerd[1630]: time="2025-12-16T03:18:41.172788432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:18:41.172947 containerd[1630]: time="2025-12-16T03:18:41.172890407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:41.173976 kubelet[2816]: E1216 03:18:41.173293 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:18:41.173976 kubelet[2816]: E1216 03:18:41.173346 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:18:41.173976 kubelet[2816]: E1216 03:18:41.173474 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmlln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vbf4g_calico-system(8f212018-4b88-48fc-94d2-420427ed0241): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:41.177347 containerd[1630]: time="2025-12-16T03:18:41.177291505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:18:41.597787 containerd[1630]: time="2025-12-16T03:18:41.597574689Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:41.598732 containerd[1630]: time="2025-12-16T03:18:41.598681566Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:18:41.598848 containerd[1630]: time="2025-12-16T03:18:41.598788169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:41.599031 kubelet[2816]: E1216 03:18:41.598985 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:18:41.599322 kubelet[2816]: E1216 03:18:41.599035 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:18:41.599421 kubelet[2816]: E1216 03:18:41.599175 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmlln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vbf4g_calico-system(8f212018-4b88-48fc-94d2-420427ed0241): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:41.600591 kubelet[2816]: E1216 03:18:41.600546 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:18:46.738584 containerd[1630]: time="2025-12-16T03:18:46.738469363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:18:47.194594 containerd[1630]: time="2025-12-16T03:18:47.194358311Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:47.195653 containerd[1630]: time="2025-12-16T03:18:47.195619585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:18:47.195733 containerd[1630]: time="2025-12-16T03:18:47.195707684Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:47.196770 kubelet[2816]: E1216 03:18:47.195877 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:47.196770 kubelet[2816]: E1216 03:18:47.195937 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:18:47.197280 kubelet[2816]: E1216 03:18:47.196998 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsgrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9b7f5fc68-z5vrj_calico-apiserver(0981f349-361d-45e9-bda1-a29e4e4386d6): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:47.197470 containerd[1630]: time="2025-12-16T03:18:47.197445557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:18:47.198899 kubelet[2816]: E1216 03:18:47.198852 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6" Dec 16 03:18:47.632091 containerd[1630]: time="2025-12-16T03:18:47.631896489Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:47.632973 containerd[1630]: time="2025-12-16T03:18:47.632908319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:18:47.633276 containerd[1630]: time="2025-12-16T03:18:47.632965869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:47.633620 kubelet[2816]: E1216 03:18:47.633519 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:18:47.633620 kubelet[2816]: E1216 03:18:47.633591 2816 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:18:47.634522 kubelet[2816]: E1216 03:18:47.634432 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c989,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-dcc4656ff-vfbkw_calico-system(2deef273-b182-480c-9527-049c0bd96660): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:47.636800 kubelet[2816]: E1216 03:18:47.635965 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:18:49.742069 kubelet[2816]: 
E1216 03:18:49.741948 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c6d594499-pd2v8" podUID="4cefbc96-7243-4733-b6b3-1ddb2b5191b3" Dec 16 03:18:50.737587 kubelet[2816]: E1216 03:18:50.737540 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:18:53.736385 containerd[1630]: time="2025-12-16T03:18:53.736289128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:18:54.156875 containerd[1630]: time="2025-12-16T03:18:54.156691251Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:18:54.158522 containerd[1630]: time="2025-12-16T03:18:54.158428238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:18:54.158662 containerd[1630]: time="2025-12-16T03:18:54.158533209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:18:54.158985 kubelet[2816]: E1216 03:18:54.158926 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:18:54.159484 kubelet[2816]: E1216 03:18:54.159019 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:18:54.160418 kubelet[2816]: E1216 03:18:54.160324 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhglz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-z7p8t_calico-system(83f66e0e-6c09-4937-932e-1ce867d20286): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:18:54.161643 kubelet[2816]: E1216 03:18:54.161565 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z7p8t" podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:18:56.742440 kubelet[2816]: E1216 03:18:56.742183 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:18:57.738181 kubelet[2816]: E1216 03:18:57.738111 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6" Dec 16 03:19:01.738256 kubelet[2816]: E1216 03:19:01.737719 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c6d594499-pd2v8" podUID="4cefbc96-7243-4733-b6b3-1ddb2b5191b3" Dec 16 03:19:01.739072 kubelet[2816]: E1216 03:19:01.738993 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:19:04.114111 systemd[1]: Started sshd@7-65.108.246.88:22-139.178.89.65:45240.service - OpenSSH per-connection server daemon (139.178.89.65:45240). Dec 16 03:19:04.130730 kernel: kauditd_printk_skb: 195 callbacks suppressed Dec 16 03:19:04.130946 kernel: audit: type=1130 audit(1765855144.114:744): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-65.108.246.88:22-139.178.89.65:45240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:04.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-65.108.246.88:22-139.178.89.65:45240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:04.737997 kubelet[2816]: E1216 03:19:04.737944 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:19:05.157000 audit[5054]: USER_ACCT pid=5054 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:05.160900 sshd-session[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:05.164820 kernel: audit: type=1101 audit(1765855145.157:745): pid=5054 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:05.164850 sshd[5054]: Accepted publickey for core from 139.178.89.65 port 45240 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:19:05.158000 audit[5054]: CRED_ACQ pid=5054 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:05.175370 kernel: audit: type=1103 audit(1765855145.158:746): pid=5054 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:05.175417 kernel: audit: type=1006 audit(1765855145.158:747): pid=5054 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 16 03:19:05.172224 systemd-logind[1604]: New session 9 of user core. Dec 16 03:19:05.158000 audit[5054]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff106d6480 a2=3 a3=0 items=0 ppid=1 pid=5054 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:05.177351 kernel: audit: type=1300 audit(1765855145.158:747): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff106d6480 a2=3 a3=0 items=0 ppid=1 pid=5054 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:05.158000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:05.183358 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 16 03:19:05.186043 kernel: audit: type=1327 audit(1765855145.158:747): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:05.187000 audit[5054]: USER_START pid=5054 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:05.195954 kernel: audit: type=1105 audit(1765855145.187:748): pid=5054 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:05.197471 kernel: audit: type=1103 audit(1765855145.190:749): pid=5060 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:05.190000 audit[5060]: CRED_ACQ pid=5060 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:06.332126 sshd[5060]: Connection closed by 139.178.89.65 port 45240 Dec 16 03:19:06.332896 sshd-session[5054]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:06.337000 audit[5054]: USER_END pid=5054 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:06.340144 systemd-logind[1604]: Session 9 logged out. Waiting for processes to exit. Dec 16 03:19:06.347789 kernel: audit: type=1106 audit(1765855146.337:750): pid=5054 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:06.348796 systemd[1]: sshd@7-65.108.246.88:22-139.178.89.65:45240.service: Deactivated successfully. Dec 16 03:19:06.351428 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 03:19:06.337000 audit[5054]: CRED_DISP pid=5054 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:06.355457 systemd-logind[1604]: Removed session 9. Dec 16 03:19:06.357768 kernel: audit: type=1104 audit(1765855146.337:751): pid=5054 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:06.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-65.108.246.88:22-139.178.89.65:45240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:06.737654 kubelet[2816]: E1216 03:19:06.737349 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z7p8t" podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:19:08.736639 kubelet[2816]: E1216 03:19:08.736499 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:19:09.737529 kubelet[2816]: E1216 03:19:09.736352 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6" Dec 16 03:19:11.490119 systemd[1]: Started sshd@8-65.108.246.88:22-139.178.89.65:60150.service - OpenSSH per-connection server daemon (139.178.89.65:60150). Dec 16 03:19:11.501911 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:19:11.501952 kernel: audit: type=1130 audit(1765855151.489:753): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-65.108.246.88:22-139.178.89.65:60150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:11.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-65.108.246.88:22-139.178.89.65:60150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:12.353000 audit[5073]: USER_ACCT pid=5073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:12.358085 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:12.369072 kernel: audit: type=1101 audit(1765855152.353:754): pid=5073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:12.369124 sshd[5073]: Accepted publickey for core from 139.178.89.65 port 60150 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:19:12.368394 systemd-logind[1604]: New session 10 of user core. 
Dec 16 03:19:12.353000 audit[5073]: CRED_ACQ pid=5073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:12.383784 kernel: audit: type=1103 audit(1765855152.353:755): pid=5073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:12.383891 kernel: audit: type=1006 audit(1765855152.353:756): pid=5073 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 16 03:19:12.382337 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 03:19:12.353000 audit[5073]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc78d089b0 a2=3 a3=0 items=0 ppid=1 pid=5073 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:12.390650 kernel: audit: type=1300 audit(1765855152.353:756): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc78d089b0 a2=3 a3=0 items=0 ppid=1 pid=5073 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:12.399390 kernel: audit: type=1327 audit(1765855152.353:756): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:12.353000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:12.398000 audit[5073]: USER_START pid=5073 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:12.405034 kernel: audit: type=1105 audit(1765855152.398:757): pid=5073 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:12.413000 audit[5077]: CRED_ACQ pid=5077 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:12.423823 kernel: audit: type=1103 audit(1765855152.413:758): pid=5077 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:12.738861 kubelet[2816]: E1216 03:19:12.738815 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:19:12.943221 sshd[5077]: Connection closed by 139.178.89.65 port 60150 Dec 16 03:19:12.943106 sshd-session[5073]: pam_unix(sshd:session): 
session closed for user core Dec 16 03:19:12.942000 audit[5073]: USER_END pid=5073 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:12.946361 systemd-logind[1604]: Session 10 logged out. Waiting for processes to exit. Dec 16 03:19:12.947027 systemd[1]: sshd@8-65.108.246.88:22-139.178.89.65:60150.service: Deactivated successfully. Dec 16 03:19:12.949555 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 03:19:12.953262 systemd-logind[1604]: Removed session 10. Dec 16 03:19:12.954785 kernel: audit: type=1106 audit(1765855152.942:759): pid=5073 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:12.942000 audit[5073]: CRED_DISP pid=5073 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:12.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-65.108.246.88:22-139.178.89.65:60150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:12.964175 kernel: audit: type=1104 audit(1765855152.942:760): pid=5073 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:16.736395 kubelet[2816]: E1216 03:19:16.736250 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c6d594499-pd2v8" podUID="4cefbc96-7243-4733-b6b3-1ddb2b5191b3" Dec 16 03:19:18.117953 systemd[1]: Started sshd@9-65.108.246.88:22-139.178.89.65:60162.service - OpenSSH per-connection server daemon (139.178.89.65:60162). Dec 16 03:19:18.123876 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:19:18.123974 kernel: audit: type=1130 audit(1765855158.117:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-65.108.246.88:22-139.178.89.65:60162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:18.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-65.108.246.88:22-139.178.89.65:60162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:18.739349 kubelet[2816]: E1216 03:19:18.738885 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z7p8t" podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:19:18.741371 kubelet[2816]: E1216 03:19:18.740668 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:19:18.955000 audit[5116]: USER_ACCT pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:18.963934 kernel: audit: type=1101 audit(1765855158.955:763): pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:18.964488 sshd[5116]: Accepted publickey for core from 139.178.89.65 port 60162 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:19:18.965341 sshd-session[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:18.963000 audit[5116]: CRED_ACQ pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:18.972779 kernel: audit: type=1103 audit(1765855158.963:764): pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:18.977786 kernel: audit: type=1006 audit(1765855158.963:765): pid=5116 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 03:19:18.977849 kernel: audit: type=1300 audit(1765855158.963:765): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff48231fd0 a2=3 a3=0 items=0 ppid=1 pid=5116 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:18.963000 audit[5116]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff48231fd0 a2=3 a3=0 items=0 ppid=1 pid=5116 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:18.984080 systemd-logind[1604]: New session 11 of user core. 
Dec 16 03:19:18.963000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:18.987773 kernel: audit: type=1327 audit(1765855158.963:765): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:18.990244 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 03:19:18.995000 audit[5116]: USER_START pid=5116 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:19.004809 kernel: audit: type=1105 audit(1765855158.995:766): pid=5116 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:19.004000 audit[5120]: CRED_ACQ pid=5120 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:19.015806 kernel: audit: type=1103 audit(1765855159.004:767): pid=5120 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:19.544875 sshd[5120]: Connection closed by 139.178.89.65 port 60162 Dec 16 03:19:19.546897 sshd-session[5116]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:19.550000 audit[5116]: USER_END pid=5116 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:19.556068 systemd[1]: sshd@9-65.108.246.88:22-139.178.89.65:60162.service: Deactivated successfully. Dec 16 03:19:19.559353 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 03:19:19.559773 kernel: audit: type=1106 audit(1765855159.550:768): pid=5116 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:19.550000 audit[5116]: CRED_DISP pid=5116 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:19.564662 systemd-logind[1604]: Session 11 logged out. Waiting for processes to exit. Dec 16 03:19:19.566695 systemd-logind[1604]: Removed session 11. Dec 16 03:19:19.567458 kernel: audit: type=1104 audit(1765855159.550:769): pid=5116 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:19.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-65.108.246.88:22-139.178.89.65:60162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:19.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-65.108.246.88:22-139.178.89.65:60164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:19.713137 systemd[1]: Started sshd@10-65.108.246.88:22-139.178.89.65:60164.service - OpenSSH per-connection server daemon (139.178.89.65:60164). Dec 16 03:19:20.579000 audit[5138]: USER_ACCT pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:20.580947 sshd[5138]: Accepted publickey for core from 139.178.89.65 port 60164 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:19:20.580000 audit[5138]: CRED_ACQ pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:20.580000 audit[5138]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7769a0a0 a2=3 a3=0 items=0 ppid=1 pid=5138 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:20.580000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:20.582464 sshd-session[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:20.587549 systemd-logind[1604]: New session 12 of user core. Dec 16 03:19:20.593927 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 03:19:20.596000 audit[5138]: USER_START pid=5138 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:20.598000 audit[5142]: CRED_ACQ pid=5142 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:20.739833 kubelet[2816]: E1216 03:19:20.738956 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:19:21.215779 sshd[5142]: Connection closed by 139.178.89.65 port 60164 Dec 16 03:19:21.217058 sshd-session[5138]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:21.217000 audit[5138]: USER_END pid=5138 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:21.217000 audit[5138]: CRED_DISP pid=5138 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:21.221299 systemd[1]: sshd@10-65.108.246.88:22-139.178.89.65:60164.service: Deactivated successfully. Dec 16 03:19:21.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-65.108.246.88:22-139.178.89.65:60164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:21.223919 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 03:19:21.225520 systemd-logind[1604]: Session 12 logged out. Waiting for processes to exit. Dec 16 03:19:21.228427 systemd-logind[1604]: Removed session 12. Dec 16 03:19:21.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-65.108.246.88:22-139.178.89.65:56206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:21.384025 systemd[1]: Started sshd@11-65.108.246.88:22-139.178.89.65:56206.service - OpenSSH per-connection server daemon (139.178.89.65:56206). 
Dec 16 03:19:22.243000 audit[5152]: USER_ACCT pid=5152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:22.245230 sshd[5152]: Accepted publickey for core from 139.178.89.65 port 56206 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:19:22.246000 audit[5152]: CRED_ACQ pid=5152 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:22.246000 audit[5152]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc16c33810 a2=3 a3=0 items=0 ppid=1 pid=5152 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:22.246000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:22.249311 sshd-session[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:22.258126 systemd-logind[1604]: New session 13 of user core. Dec 16 03:19:22.265925 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 03:19:22.270000 audit[5152]: USER_START pid=5152 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:22.273000 audit[5156]: CRED_ACQ pid=5156 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:22.884832 sshd[5156]: Connection closed by 139.178.89.65 port 56206 Dec 16 03:19:22.885489 sshd-session[5152]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:22.888000 audit[5152]: USER_END pid=5152 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:22.888000 audit[5152]: CRED_DISP pid=5152 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:22.893507 systemd[1]: sshd@11-65.108.246.88:22-139.178.89.65:56206.service: Deactivated successfully. Dec 16 03:19:22.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-65.108.246.88:22-139.178.89.65:56206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:22.895630 systemd[1]: session-13.scope: Deactivated successfully. 
Dec 16 03:19:22.898174 systemd-logind[1604]: Session 13 logged out. Waiting for processes to exit. Dec 16 03:19:22.899130 systemd-logind[1604]: Removed session 13. Dec 16 03:19:23.735619 kubelet[2816]: E1216 03:19:23.735553 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6" Dec 16 03:19:26.737632 kubelet[2816]: E1216 03:19:26.737495 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:19:28.097728 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 03:19:28.098360 kernel: audit: type=1130 audit(1765855168.088:789): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-65.108.246.88:22-139.178.89.65:56214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:28.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-65.108.246.88:22-139.178.89.65:56214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:28.088920 systemd[1]: Started sshd@12-65.108.246.88:22-139.178.89.65:56214.service - OpenSSH per-connection server daemon (139.178.89.65:56214). Dec 16 03:19:29.082000 audit[5171]: USER_ACCT pid=5171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:29.084692 sshd-session[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:29.087312 sshd[5171]: Accepted publickey for core from 139.178.89.65 port 56214 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:19:29.098435 kernel: audit: type=1101 audit(1765855169.082:790): pid=5171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:29.098500 kernel: audit: type=1103 audit(1765855169.082:791): pid=5171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:29.082000 audit[5171]: CRED_ACQ pid=5171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:29.103264 systemd-logind[1604]: New session 14 of user core. Dec 16 03:19:29.107933 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 03:19:29.108775 kernel: audit: type=1006 audit(1765855169.083:792): pid=5171 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 03:19:29.083000 audit[5171]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea3011770 a2=3 a3=0 items=0 ppid=1 pid=5171 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:29.083000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:29.118898 kernel: audit: type=1300 audit(1765855169.083:792): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea3011770 a2=3 a3=0 items=0 ppid=1 pid=5171 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:29.118942 kernel: audit: type=1327 audit(1765855169.083:792): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:29.116000 audit[5171]: USER_START pid=5171 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:29.121817 kernel: audit: type=1105 audit(1765855169.116:793): pid=5171 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:29.119000 audit[5175]: CRED_ACQ pid=5175 uid=0 auid=500 ses=14 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:29.128633 kernel: audit: type=1103 audit(1765855169.119:794): pid=5175 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:29.767330 sshd[5175]: Connection closed by 139.178.89.65 port 56214 Dec 16 03:19:29.768963 sshd-session[5171]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:29.770000 audit[5171]: USER_END pid=5171 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:29.789476 systemd[1]: sshd@12-65.108.246.88:22-139.178.89.65:56214.service: Deactivated successfully. Dec 16 03:19:29.789786 kernel: audit: type=1106 audit(1765855169.770:795): pid=5171 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:29.792509 systemd[1]: session-14.scope: Deactivated successfully. 
Dec 16 03:19:29.770000 audit[5171]: CRED_DISP pid=5171 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:29.800632 systemd-logind[1604]: Session 14 logged out. Waiting for processes to exit. Dec 16 03:19:29.802736 systemd-logind[1604]: Removed session 14. Dec 16 03:19:29.805816 kernel: audit: type=1104 audit(1765855169.770:796): pid=5171 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:29.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-65.108.246.88:22-139.178.89.65:56214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:29.922079 systemd[1]: Started sshd@13-65.108.246.88:22-139.178.89.65:56230.service - OpenSSH per-connection server daemon (139.178.89.65:56230). Dec 16 03:19:29.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-65.108.246.88:22-139.178.89.65:56230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:30.771000 audit[5187]: USER_ACCT pid=5187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:30.772674 sshd[5187]: Accepted publickey for core from 139.178.89.65 port 56230 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:19:30.772000 audit[5187]: CRED_ACQ pid=5187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:30.773000 audit[5187]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8b7244d0 a2=3 a3=0 items=0 ppid=1 pid=5187 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:30.773000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:30.774950 sshd-session[5187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:30.779921 systemd-logind[1604]: New session 15 of user core. Dec 16 03:19:30.785893 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 03:19:30.788000 audit[5187]: USER_START pid=5187 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:30.790000 audit[5191]: CRED_ACQ pid=5191 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:31.698730 sshd[5191]: Connection closed by 139.178.89.65 port 56230 Dec 16 03:19:31.702043 sshd-session[5187]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:31.704000 audit[5187]: USER_END pid=5187 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:31.704000 audit[5187]: CRED_DISP pid=5187 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:31.708630 systemd[1]: sshd@13-65.108.246.88:22-139.178.89.65:56230.service: Deactivated successfully. Dec 16 03:19:31.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-65.108.246.88:22-139.178.89.65:56230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:31.713095 systemd[1]: session-15.scope: Deactivated successfully. 
Dec 16 03:19:31.715290 systemd-logind[1604]: Session 15 logged out. Waiting for processes to exit. Dec 16 03:19:31.717598 systemd-logind[1604]: Removed session 15. Dec 16 03:19:31.739660 kubelet[2816]: E1216 03:19:31.739591 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c6d594499-pd2v8" podUID="4cefbc96-7243-4733-b6b3-1ddb2b5191b3" Dec 16 03:19:31.863930 systemd[1]: Started sshd@14-65.108.246.88:22-139.178.89.65:47040.service - OpenSSH per-connection server daemon (139.178.89.65:47040). Dec 16 03:19:31.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-65.108.246.88:22-139.178.89.65:47040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:32.743634 kubelet[2816]: E1216 03:19:32.743320 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z7p8t" podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:19:32.749000 audit[5201]: USER_ACCT pid=5201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:32.752515 kubelet[2816]: E1216 03:19:32.752263 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:19:32.752665 sshd[5201]: Accepted publickey for core from 139.178.89.65 port 47040 ssh2: RSA 
SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:19:32.753000 audit[5201]: CRED_ACQ pid=5201 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:32.753000 audit[5201]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0cb69a70 a2=3 a3=0 items=0 ppid=1 pid=5201 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:32.753000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:32.757306 sshd-session[5201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:32.768700 systemd-logind[1604]: New session 16 of user core. Dec 16 03:19:32.779512 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 03:19:32.787000 audit[5201]: USER_START pid=5201 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:32.790000 audit[5205]: CRED_ACQ pid=5205 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:33.735241 kubelet[2816]: E1216 03:19:33.735203 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:19:33.882887 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 16 03:19:33.882996 kernel: audit: type=1325 audit(1765855173.875:813): table=filter:138 family=2 entries=26 op=nft_register_rule pid=5215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:33.875000 audit[5215]: NETFILTER_CFG table=filter:138 family=2 entries=26 op=nft_register_rule pid=5215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:33.887907 kernel: audit: type=1300 audit(1765855173.875:813): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff886be890 a2=0 a3=7fff886be87c items=0 ppid=2942 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:33.875000 audit[5215]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff886be890 a2=0 a3=7fff886be87c items=0 ppid=2942 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:33.875000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:33.895533 kernel: audit: type=1327 audit(1765855173.875:813): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:33.903000 audit[5215]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=5215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:33.903000 audit[5215]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff886be890 a2=0 a3=0 items=0 ppid=2942 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:33.912151 kernel: audit: type=1325 audit(1765855173.903:814): table=nat:139 family=2 entries=20 op=nft_register_rule pid=5215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:33.912366 kernel: audit: type=1300 audit(1765855173.903:814): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff886be890 a2=0 a3=0 items=0 ppid=2942 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:33.903000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:33.919633 kernel: audit: type=1327 audit(1765855173.903:814): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:33.932000 audit[5217]: NETFILTER_CFG table=filter:140 family=2 entries=38 op=nft_register_rule pid=5217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:33.932000 audit[5217]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffdc88e2000 a2=0 a3=7ffdc88e1fec items=0 ppid=2942 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:33.940777 kernel: audit: type=1325 audit(1765855173.932:815): table=filter:140 family=2 entries=38 op=nft_register_rule pid=5217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:33.940836 kernel: audit: type=1300 audit(1765855173.932:815): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffdc88e2000 a2=0 a3=7ffdc88e1fec items=0 ppid=2942 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:33.932000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:33.948263 kernel: audit: type=1327 audit(1765855173.932:815): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:33.940000 audit[5217]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:33.953062 kernel: audit: type=1325 
audit(1765855173.940:816): table=nat:141 family=2 entries=20 op=nft_register_rule pid=5217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:33.940000 audit[5217]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdc88e2000 a2=0 a3=0 items=0 ppid=2942 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:33.940000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:34.019740 sshd[5205]: Connection closed by 139.178.89.65 port 47040 Dec 16 03:19:34.021788 sshd-session[5201]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:34.025000 audit[5201]: USER_END pid=5201 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:34.026000 audit[5201]: CRED_DISP pid=5201 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:34.031181 systemd[1]: sshd@14-65.108.246.88:22-139.178.89.65:47040.service: Deactivated successfully. Dec 16 03:19:34.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-65.108.246.88:22-139.178.89.65:47040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:34.034338 systemd[1]: session-16.scope: Deactivated successfully. 
Dec 16 03:19:34.039556 systemd-logind[1604]: Session 16 logged out. Waiting for processes to exit. Dec 16 03:19:34.041023 systemd-logind[1604]: Removed session 16. Dec 16 03:19:34.187899 systemd[1]: Started sshd@15-65.108.246.88:22-139.178.89.65:47052.service - OpenSSH per-connection server daemon (139.178.89.65:47052). Dec 16 03:19:34.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-65.108.246.88:22-139.178.89.65:47052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:35.052000 audit[5224]: USER_ACCT pid=5224 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:35.054212 sshd[5224]: Accepted publickey for core from 139.178.89.65 port 47052 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:19:35.054000 audit[5224]: CRED_ACQ pid=5224 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:35.054000 audit[5224]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeee31cde0 a2=3 a3=0 items=0 ppid=1 pid=5224 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:35.054000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:35.056925 sshd-session[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:35.063334 systemd-logind[1604]: New session 17 of user core. 
Dec 16 03:19:35.069052 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 03:19:35.074000 audit[5224]: USER_START pid=5224 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:35.076000 audit[5228]: CRED_ACQ pid=5228 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:35.818329 sshd[5228]: Connection closed by 139.178.89.65 port 47052 Dec 16 03:19:35.819232 sshd-session[5224]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:35.820000 audit[5224]: USER_END pid=5224 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:35.822000 audit[5224]: CRED_DISP pid=5224 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:35.828424 systemd[1]: sshd@15-65.108.246.88:22-139.178.89.65:47052.service: Deactivated successfully. Dec 16 03:19:35.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-65.108.246.88:22-139.178.89.65:47052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:35.833319 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 03:19:35.835785 systemd-logind[1604]: Session 17 logged out. Waiting for processes to exit. Dec 16 03:19:35.838251 systemd-logind[1604]: Removed session 17. Dec 16 03:19:35.993527 systemd[1]: Started sshd@16-65.108.246.88:22-139.178.89.65:47054.service - OpenSSH per-connection server daemon (139.178.89.65:47054). Dec 16 03:19:35.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-65.108.246.88:22-139.178.89.65:47054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:36.736458 kubelet[2816]: E1216 03:19:36.736377 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6" Dec 16 03:19:36.905000 audit[5238]: USER_ACCT pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:36.907823 sshd[5238]: Accepted publickey for core from 139.178.89.65 port 47054 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:19:36.907000 audit[5238]: CRED_ACQ pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 
addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:36.907000 audit[5238]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec0900bf0 a2=3 a3=0 items=0 ppid=1 pid=5238 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:36.907000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:36.911112 sshd-session[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:36.922575 systemd-logind[1604]: New session 18 of user core. Dec 16 03:19:36.925064 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 03:19:36.930000 audit[5238]: USER_START pid=5238 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:36.933000 audit[5242]: CRED_ACQ pid=5242 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:37.504819 sshd[5242]: Connection closed by 139.178.89.65 port 47054 Dec 16 03:19:37.505603 sshd-session[5238]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:37.506000 audit[5238]: USER_END pid=5238 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:37.506000 
audit[5238]: CRED_DISP pid=5238 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:37.510386 systemd[1]: sshd@16-65.108.246.88:22-139.178.89.65:47054.service: Deactivated successfully. Dec 16 03:19:37.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-65.108.246.88:22-139.178.89.65:47054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:37.512515 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 03:19:37.514263 systemd-logind[1604]: Session 18 logged out. Waiting for processes to exit. Dec 16 03:19:37.516090 systemd-logind[1604]: Removed session 18. Dec 16 03:19:39.722696 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 16 03:19:39.722866 kernel: audit: type=1325 audit(1765855179.709:838): table=filter:142 family=2 entries=26 op=nft_register_rule pid=5254 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:39.709000 audit[5254]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=5254 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:39.709000 audit[5254]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd6034c1a0 a2=0 a3=7ffd6034c18c items=0 ppid=2942 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:39.734902 kernel: audit: type=1300 audit(1765855179.709:838): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd6034c1a0 a2=0 a3=7ffd6034c18c items=0 ppid=2942 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:39.709000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:39.748476 kernel: audit: type=1327 audit(1765855179.709:838): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:39.748534 kernel: audit: type=1325 audit(1765855179.740:839): table=nat:143 family=2 entries=104 op=nft_register_chain pid=5254 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:39.740000 audit[5254]: NETFILTER_CFG table=nat:143 family=2 entries=104 op=nft_register_chain pid=5254 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:19:39.758259 kernel: audit: type=1300 audit(1765855179.740:839): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd6034c1a0 a2=0 a3=7ffd6034c18c items=0 ppid=2942 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:39.740000 audit[5254]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd6034c1a0 a2=0 a3=7ffd6034c18c items=0 ppid=2942 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:39.764849 kernel: audit: type=1327 audit(1765855179.740:839): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:39.740000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:19:40.736633 kubelet[2816]: E1216 03:19:40.735776 2816 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:19:42.680781 systemd[1]: Started sshd@17-65.108.246.88:22-139.178.89.65:60250.service - OpenSSH per-connection server daemon (139.178.89.65:60250). Dec 16 03:19:42.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-65.108.246.88:22-139.178.89.65:60250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:42.696956 kernel: audit: type=1130 audit(1765855182.681:840): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-65.108.246.88:22-139.178.89.65:60250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:43.544000 audit[5256]: USER_ACCT pid=5256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:43.546400 sshd[5256]: Accepted publickey for core from 139.178.89.65 port 60250 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:19:43.548576 sshd-session[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:43.554758 kernel: audit: type=1101 audit(1765855183.544:841): pid=5256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:43.554809 kernel: audit: type=1103 audit(1765855183.546:842): pid=5256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:43.546000 audit[5256]: CRED_ACQ pid=5256 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:43.565966 systemd-logind[1604]: New session 19 of user core. 
Dec 16 03:19:43.546000 audit[5256]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe78de04d0 a2=3 a3=0 items=0 ppid=1 pid=5256 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:43.546000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:43.566800 kernel: audit: type=1006 audit(1765855183.546:843): pid=5256 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 03:19:43.569999 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 03:19:43.574000 audit[5256]: USER_START pid=5256 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:43.575000 audit[5283]: CRED_ACQ pid=5283 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:43.735585 kubelet[2816]: E1216 03:19:43.735045 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z7p8t" podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:19:43.736239 kubelet[2816]: E1216 03:19:43.736112 2816 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c6d594499-pd2v8" podUID="4cefbc96-7243-4733-b6b3-1ddb2b5191b3" Dec 16 03:19:44.147897 sshd[5283]: Connection closed by 139.178.89.65 port 60250 Dec 16 03:19:44.149966 sshd-session[5256]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:44.152000 audit[5256]: USER_END pid=5256 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:44.152000 audit[5256]: CRED_DISP pid=5256 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:44.158019 systemd-logind[1604]: Session 19 logged out. Waiting for processes to exit. Dec 16 03:19:44.163001 systemd[1]: sshd@17-65.108.246.88:22-139.178.89.65:60250.service: Deactivated successfully. 
Dec 16 03:19:44.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-65.108.246.88:22-139.178.89.65:60250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:44.170296 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 03:19:44.178202 systemd-logind[1604]: Removed session 19. Dec 16 03:19:45.736012 kubelet[2816]: E1216 03:19:45.735962 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:19:47.736971 kubelet[2816]: E1216 03:19:47.736929 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:19:49.326914 kernel: 
kauditd_printk_skb: 7 callbacks suppressed Dec 16 03:19:49.327017 kernel: audit: type=1130 audit(1765855189.322:849): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-65.108.246.88:22-139.178.89.65:60262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:49.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-65.108.246.88:22-139.178.89.65:60262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:49.322909 systemd[1]: Started sshd@18-65.108.246.88:22-139.178.89.65:60262.service - OpenSSH per-connection server daemon (139.178.89.65:60262). Dec 16 03:19:50.165000 audit[5296]: USER_ACCT pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:50.168414 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:50.172736 sshd[5296]: Accepted publickey for core from 139.178.89.65 port 60262 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:19:50.175889 kernel: audit: type=1101 audit(1765855190.165:850): pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:50.175991 kernel: audit: type=1103 audit(1765855190.165:851): pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 
terminal=ssh res=success' Dec 16 03:19:50.165000 audit[5296]: CRED_ACQ pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:50.184178 kernel: audit: type=1006 audit(1765855190.165:852): pid=5296 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 16 03:19:50.183655 systemd-logind[1604]: New session 20 of user core. Dec 16 03:19:50.165000 audit[5296]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6203ed70 a2=3 a3=0 items=0 ppid=1 pid=5296 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:50.193222 kernel: audit: type=1300 audit(1765855190.165:852): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6203ed70 a2=3 a3=0 items=0 ppid=1 pid=5296 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:50.165000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:50.200962 kernel: audit: type=1327 audit(1765855190.165:852): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:50.202069 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 03:19:50.212000 audit[5296]: USER_START pid=5296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:50.223830 kernel: audit: type=1105 audit(1765855190.212:853): pid=5296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:50.217000 audit[5300]: CRED_ACQ pid=5300 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:50.231830 kernel: audit: type=1103 audit(1765855190.217:854): pid=5300 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:50.734785 sshd[5300]: Connection closed by 139.178.89.65 port 60262 Dec 16 03:19:50.735152 sshd-session[5296]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:50.738168 kubelet[2816]: E1216 03:19:50.738093 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6" Dec 16 03:19:50.739000 audit[5296]: USER_END pid=5296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:50.749920 kernel: audit: type=1106 audit(1765855190.739:855): pid=5296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:50.748000 audit[5296]: CRED_DISP pid=5296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:50.753430 systemd-logind[1604]: Session 20 logged out. Waiting for processes to exit. Dec 16 03:19:50.754812 systemd[1]: sshd@18-65.108.246.88:22-139.178.89.65:60262.service: Deactivated successfully. Dec 16 03:19:50.757898 kernel: audit: type=1104 audit(1765855190.748:856): pid=5296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:50.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-65.108.246.88:22-139.178.89.65:60262 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:50.760210 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 03:19:50.764298 systemd-logind[1604]: Removed session 20. Dec 16 03:19:54.736813 kubelet[2816]: E1216 03:19:54.736551 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:19:55.909310 systemd[1]: Started sshd@19-65.108.246.88:22-139.178.89.65:44930.service - OpenSSH per-connection server daemon (139.178.89.65:44930). Dec 16 03:19:55.928034 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:19:55.928105 kernel: audit: type=1130 audit(1765855195.908:858): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-65.108.246.88:22-139.178.89.65:44930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:19:55.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-65.108.246.88:22-139.178.89.65:44930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:56.738504 kubelet[2816]: E1216 03:19:56.738129 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z7p8t" podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:19:56.805452 sshd[5318]: Accepted publickey for core from 139.178.89.65 port 44930 ssh2: RSA SHA256:JdQf6WDZOhPXCF779Ufx3FYbWfH+nQzsSVz6N71sW2w Dec 16 03:19:56.803000 audit[5318]: USER_ACCT pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:56.814880 kernel: audit: type=1101 audit(1765855196.803:859): pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:56.815333 sshd-session[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:19:56.811000 audit[5318]: CRED_ACQ pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:56.823828 kernel: audit: type=1103 audit(1765855196.811:860): pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:56.829873 systemd-logind[1604]: New session 21 of user core. Dec 16 03:19:56.833763 kernel: audit: type=1006 audit(1765855196.811:861): pid=5318 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 16 03:19:56.811000 audit[5318]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff303e6850 a2=3 a3=0 items=0 ppid=1 pid=5318 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:56.844648 kernel: audit: type=1300 audit(1765855196.811:861): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff303e6850 a2=3 a3=0 items=0 ppid=1 pid=5318 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:19:56.845655 kernel: audit: type=1327 audit(1765855196.811:861): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:56.811000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:19:56.842862 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 03:19:56.858777 kernel: audit: type=1105 audit(1765855196.848:862): pid=5318 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:56.848000 audit[5318]: USER_START pid=5318 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:56.849000 audit[5322]: CRED_ACQ pid=5322 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:56.868796 kernel: audit: type=1103 audit(1765855196.849:863): pid=5322 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:57.422596 sshd[5322]: Connection closed by 139.178.89.65 port 44930 Dec 16 03:19:57.424262 sshd-session[5318]: pam_unix(sshd:session): session closed for user core Dec 16 03:19:57.424000 audit[5318]: USER_END pid=5318 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:57.429743 systemd[1]: 
sshd@19-65.108.246.88:22-139.178.89.65:44930.service: Deactivated successfully. Dec 16 03:19:57.432378 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 03:19:57.436999 kernel: audit: type=1106 audit(1765855197.424:864): pid=5318 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:57.424000 audit[5318]: CRED_DISP pid=5318 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:57.439983 systemd-logind[1604]: Session 21 logged out. Waiting for processes to exit. Dec 16 03:19:57.441034 systemd-logind[1604]: Removed session 21. Dec 16 03:19:57.445790 kernel: audit: type=1104 audit(1765855197.424:865): pid=5318 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 16 03:19:57.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-65.108.246.88:22-139.178.89.65:44930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:19:57.736882 kubelet[2816]: E1216 03:19:57.736799 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:19:57.739198 containerd[1630]: time="2025-12-16T03:19:57.738822250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:19:58.173701 containerd[1630]: time="2025-12-16T03:19:58.173371805Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:19:58.175048 containerd[1630]: time="2025-12-16T03:19:58.174858092Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:19:58.175345 containerd[1630]: time="2025-12-16T03:19:58.175008957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:19:58.175705 kubelet[2816]: E1216 03:19:58.175637 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:19:58.176716 kubelet[2816]: E1216 03:19:58.175809 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:19:58.176716 kubelet[2816]: E1216 03:19:58.176303 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e489f2e44bb74625b167ec1e6d4af2e7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fj9zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c6d594499-pd2v8_calico-system(4cefbc96-7243-4733-b6b3-1ddb2b5191b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:19:58.179607 containerd[1630]: time="2025-12-16T03:19:58.179539124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:19:58.798245 containerd[1630]: time="2025-12-16T03:19:58.798199863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:19:58.799238 containerd[1630]: time="2025-12-16T03:19:58.799201162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:19:58.799310 containerd[1630]: time="2025-12-16T03:19:58.799259929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:19:58.799771 kubelet[2816]: E1216 03:19:58.799413 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:19:58.799771 kubelet[2816]: E1216 03:19:58.799479 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:19:58.799839 kubelet[2816]: E1216 03:19:58.799589 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fj9zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c6d594499-pd2v8_calico-system(4cefbc96-7243-4733-b6b3-1ddb2b5191b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:19:58.800981 kubelet[2816]: E1216 03:19:58.800938 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c6d594499-pd2v8" podUID="4cefbc96-7243-4733-b6b3-1ddb2b5191b3" Dec 16 03:19:59.736232 kubelet[2816]: E1216 03:19:59.736087 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:20:02.737130 kubelet[2816]: E1216 03:20:02.736586 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6" Dec 16 03:20:08.738416 containerd[1630]: time="2025-12-16T03:20:08.738202832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:20:09.155195 containerd[1630]: time="2025-12-16T03:20:09.154963818Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:20:09.156721 containerd[1630]: time="2025-12-16T03:20:09.156577527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:20:09.156721 containerd[1630]: time="2025-12-16T03:20:09.156640083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:20:09.156981 kubelet[2816]: E1216 03:20:09.156909 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:20:09.157884 kubelet[2816]: E1216 03:20:09.156980 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:20:09.157884 kubelet[2816]: E1216 03:20:09.157326 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2vvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9b7f5fc68-6vh4x_calico-apiserver(f3b4d493-b815-435b-8539-393930301f5a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:20:09.159109 containerd[1630]: time="2025-12-16T03:20:09.158454401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:20:09.159248 kubelet[2816]: E1216 03:20:09.158905 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-6vh4x" podUID="f3b4d493-b815-435b-8539-393930301f5a" Dec 16 03:20:09.583624 containerd[1630]: time="2025-12-16T03:20:09.583545853Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
03:20:09.584989 containerd[1630]: time="2025-12-16T03:20:09.584845364Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:20:09.584989 containerd[1630]: time="2025-12-16T03:20:09.584904313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:20:09.585404 kubelet[2816]: E1216 03:20:09.585327 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:20:09.585500 kubelet[2816]: E1216 03:20:09.585424 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:20:09.585725 kubelet[2816]: E1216 03:20:09.585628 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c989,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-dcc4656ff-vfbkw_calico-system(2deef273-b182-480c-9527-049c0bd96660): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:20:09.587017 kubelet[2816]: E1216 03:20:09.586969 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-dcc4656ff-vfbkw" podUID="2deef273-b182-480c-9527-049c0bd96660" Dec 16 03:20:09.737992 kubelet[2816]: E1216 03:20:09.737901 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c6d594499-pd2v8" podUID="4cefbc96-7243-4733-b6b3-1ddb2b5191b3" Dec 16 03:20:11.735149 kubelet[2816]: E1216 03:20:11.735103 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-z7p8t" podUID="83f66e0e-6c09-4937-932e-1ce867d20286" Dec 16 03:20:11.736447 containerd[1630]: time="2025-12-16T03:20:11.735180737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:20:12.171803 containerd[1630]: time="2025-12-16T03:20:12.171669823Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:20:12.173033 containerd[1630]: time="2025-12-16T03:20:12.172996718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:20:12.173033 containerd[1630]: time="2025-12-16T03:20:12.173061338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes 
read=0" Dec 16 03:20:12.173222 kubelet[2816]: E1216 03:20:12.173178 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:20:12.173297 kubelet[2816]: E1216 03:20:12.173227 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:20:12.173381 kubelet[2816]: E1216 03:20:12.173330 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmlln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,
TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vbf4g_calico-system(8f212018-4b88-48fc-94d2-420427ed0241): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:20:12.175231 containerd[1630]: time="2025-12-16T03:20:12.175200701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:20:12.608335 containerd[1630]: time="2025-12-16T03:20:12.608284876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:20:12.609985 containerd[1630]: time="2025-12-16T03:20:12.609921684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:20:12.610076 containerd[1630]: time="2025-12-16T03:20:12.610045903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:20:12.610282 kubelet[2816]: E1216 03:20:12.610223 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:20:12.610345 kubelet[2816]: E1216 03:20:12.610286 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:20:12.610591 kubelet[2816]: E1216 03:20:12.610526 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmlln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationM
essagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vbf4g_calico-system(8f212018-4b88-48fc-94d2-420427ed0241): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:20:12.611941 kubelet[2816]: E1216 03:20:12.611876 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vbf4g" podUID="8f212018-4b88-48fc-94d2-420427ed0241" Dec 16 03:20:12.946456 systemd[1]: cri-containerd-c68807966e424b84d40d947c29a0a990fa356c2fb52c7c8319f7dbba42a4e44d.scope: Deactivated successfully. 
Dec 16 03:20:12.948479 systemd[1]: cri-containerd-c68807966e424b84d40d947c29a0a990fa356c2fb52c7c8319f7dbba42a4e44d.scope: Consumed 2.557s CPU time, 42.3M memory peak, 39.6M read from disk. Dec 16 03:20:12.953446 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:20:12.953516 kernel: audit: type=1334 audit(1765855212.950:867): prog-id=256 op=LOAD Dec 16 03:20:12.950000 audit: BPF prog-id=256 op=LOAD Dec 16 03:20:12.950000 audit: BPF prog-id=83 op=UNLOAD Dec 16 03:20:12.961395 kernel: audit: type=1334 audit(1765855212.950:868): prog-id=83 op=UNLOAD Dec 16 03:20:12.958000 audit: BPF prog-id=98 op=UNLOAD Dec 16 03:20:12.968864 kernel: audit: type=1334 audit(1765855212.958:869): prog-id=98 op=UNLOAD Dec 16 03:20:12.958000 audit: BPF prog-id=102 op=UNLOAD Dec 16 03:20:12.980821 kernel: audit: type=1334 audit(1765855212.958:870): prog-id=102 op=UNLOAD Dec 16 03:20:13.032840 kubelet[2816]: E1216 03:20:13.032777 2816 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:49832->10.0.0.2:2379: read: connection timed out" Dec 16 03:20:13.104294 systemd[1]: cri-containerd-156b3a912d272fac9bd74ccd31a0fc1a65c77f812a7f5edf1aa5fbe9339a0ebf.scope: Deactivated successfully. Dec 16 03:20:13.105980 systemd[1]: cri-containerd-156b3a912d272fac9bd74ccd31a0fc1a65c77f812a7f5edf1aa5fbe9339a0ebf.scope: Consumed 27.086s CPU time, 119.4M memory peak, 41.5M read from disk. 
Dec 16 03:20:13.107000 audit: BPF prog-id=146 op=UNLOAD Dec 16 03:20:13.107000 audit: BPF prog-id=150 op=UNLOAD Dec 16 03:20:13.111268 kernel: audit: type=1334 audit(1765855213.107:871): prog-id=146 op=UNLOAD Dec 16 03:20:13.111372 kernel: audit: type=1334 audit(1765855213.107:872): prog-id=150 op=UNLOAD Dec 16 03:20:13.137076 containerd[1630]: time="2025-12-16T03:20:13.137042651Z" level=info msg="received container exit event container_id:\"c68807966e424b84d40d947c29a0a990fa356c2fb52c7c8319f7dbba42a4e44d\" id:\"c68807966e424b84d40d947c29a0a990fa356c2fb52c7c8319f7dbba42a4e44d\" pid:2637 exit_status:1 exited_at:{seconds:1765855213 nanos:30476126}" Dec 16 03:20:13.137514 containerd[1630]: time="2025-12-16T03:20:13.137418173Z" level=info msg="received container exit event container_id:\"156b3a912d272fac9bd74ccd31a0fc1a65c77f812a7f5edf1aa5fbe9339a0ebf\" id:\"156b3a912d272fac9bd74ccd31a0fc1a65c77f812a7f5edf1aa5fbe9339a0ebf\" pid:3153 exit_status:1 exited_at:{seconds:1765855213 nanos:111897737}" Dec 16 03:20:13.210830 kubelet[2816]: E1216 03:20:13.190461 2816 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:49644->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-9b7f5fc68-z5vrj.188193ccdd8a2dfe calico-apiserver 1741 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-9b7f5fc68-z5vrj,UID:0981f349-361d-45e9-bda1-a29e4e4386d6,APIVersion:v1,ResourceVersion:821,FieldPath:spec.containers{calico-apiserver},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4547-0-0-6-1137cb7bd3,},FirstTimestamp:2025-12-16 03:17:17 +0000 UTC,LastTimestamp:2025-12-16 03:20:02.736525675 +0000 UTC m=+216.105416413,Count:11,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-6-1137cb7bd3,}" Dec 16 03:20:13.222075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c68807966e424b84d40d947c29a0a990fa356c2fb52c7c8319f7dbba42a4e44d-rootfs.mount: Deactivated successfully. Dec 16 03:20:13.224667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-156b3a912d272fac9bd74ccd31a0fc1a65c77f812a7f5edf1aa5fbe9339a0ebf-rootfs.mount: Deactivated successfully. Dec 16 03:20:13.578887 kubelet[2816]: I1216 03:20:13.578685 2816 scope.go:117] "RemoveContainer" containerID="156b3a912d272fac9bd74ccd31a0fc1a65c77f812a7f5edf1aa5fbe9339a0ebf" Dec 16 03:20:13.578887 kubelet[2816]: I1216 03:20:13.578863 2816 scope.go:117] "RemoveContainer" containerID="c68807966e424b84d40d947c29a0a990fa356c2fb52c7c8319f7dbba42a4e44d" Dec 16 03:20:13.595931 containerd[1630]: time="2025-12-16T03:20:13.595857285Z" level=info msg="CreateContainer within sandbox \"c454604b4aa47755bb95ce37a7d61e585cf9104e3a224798569a41c09b093c9b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 16 03:20:13.597995 containerd[1630]: time="2025-12-16T03:20:13.597893971Z" level=info msg="CreateContainer within sandbox \"551c6641841b9af5e8df00dcc7b1577d563da73ad1e337d0401e70779e85bf95\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 16 03:20:13.657508 containerd[1630]: time="2025-12-16T03:20:13.657243887Z" level=info msg="Container 840011616dad166b3cc81c287b29f14347ce87511f5adb42cf110040d15dbafe: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:20:13.680310 containerd[1630]: time="2025-12-16T03:20:13.680254804Z" level=info msg="Container 4e673c8c520e10ed91f17bf2ef884f1b3461b25ee7b58dfa8b0e02c1b253d519: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:20:13.719411 containerd[1630]: time="2025-12-16T03:20:13.719303460Z" level=info msg="CreateContainer within sandbox \"551c6641841b9af5e8df00dcc7b1577d563da73ad1e337d0401e70779e85bf95\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4e673c8c520e10ed91f17bf2ef884f1b3461b25ee7b58dfa8b0e02c1b253d519\"" Dec 16 03:20:13.720963 containerd[1630]: time="2025-12-16T03:20:13.720844281Z" level=info msg="StartContainer for \"4e673c8c520e10ed91f17bf2ef884f1b3461b25ee7b58dfa8b0e02c1b253d519\"" Dec 16 03:20:13.721198 containerd[1630]: time="2025-12-16T03:20:13.721171134Z" level=info msg="CreateContainer within sandbox \"c454604b4aa47755bb95ce37a7d61e585cf9104e3a224798569a41c09b093c9b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"840011616dad166b3cc81c287b29f14347ce87511f5adb42cf110040d15dbafe\"" Dec 16 03:20:13.722990 containerd[1630]: time="2025-12-16T03:20:13.722937281Z" level=info msg="StartContainer for \"840011616dad166b3cc81c287b29f14347ce87511f5adb42cf110040d15dbafe\"" Dec 16 03:20:13.724089 containerd[1630]: time="2025-12-16T03:20:13.724036899Z" level=info msg="connecting to shim 840011616dad166b3cc81c287b29f14347ce87511f5adb42cf110040d15dbafe" address="unix:///run/containerd/s/c9467190338efe3aa9a7f4cdd250bca1e0fdad25bd6acb7c0b2b0b13982666df" protocol=ttrpc version=3 Dec 16 03:20:13.725215 containerd[1630]: time="2025-12-16T03:20:13.725156824Z" level=info msg="connecting to shim 4e673c8c520e10ed91f17bf2ef884f1b3461b25ee7b58dfa8b0e02c1b253d519" address="unix:///run/containerd/s/5df3d166f563b55d98c01ef544795a5e1f9a0509f20b7e396c4b5d5b124147f3" protocol=ttrpc version=3 Dec 16 03:20:13.762995 systemd[1]: Started cri-containerd-840011616dad166b3cc81c287b29f14347ce87511f5adb42cf110040d15dbafe.scope - libcontainer container 840011616dad166b3cc81c287b29f14347ce87511f5adb42cf110040d15dbafe. Dec 16 03:20:13.771215 systemd[1]: Started cri-containerd-4e673c8c520e10ed91f17bf2ef884f1b3461b25ee7b58dfa8b0e02c1b253d519.scope - libcontainer container 4e673c8c520e10ed91f17bf2ef884f1b3461b25ee7b58dfa8b0e02c1b253d519. 
Dec 16 03:20:13.796000 audit: BPF prog-id=257 op=LOAD Dec 16 03:20:13.803740 kernel: audit: type=1334 audit(1765855213.796:873): prog-id=257 op=LOAD Dec 16 03:20:13.803838 kernel: audit: type=1334 audit(1765855213.797:874): prog-id=258 op=LOAD Dec 16 03:20:13.797000 audit: BPF prog-id=258 op=LOAD Dec 16 03:20:13.813818 kernel: audit: type=1300 audit(1765855213.797:874): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2514 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:13.797000 audit[5388]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2514 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:13.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465363733633863353230653130656439316631376266326566383834 Dec 16 03:20:13.828516 kernel: audit: type=1327 audit(1765855213.797:874): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465363733633863353230653130656439316631376266326566383834 Dec 16 03:20:13.797000 audit: BPF prog-id=258 op=UNLOAD Dec 16 03:20:13.797000 audit[5388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:20:13.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465363733633863353230653130656439316631376266326566383834 Dec 16 03:20:13.797000 audit: BPF prog-id=259 op=LOAD Dec 16 03:20:13.797000 audit[5388]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2514 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:13.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465363733633863353230653130656439316631376266326566383834 Dec 16 03:20:13.797000 audit: BPF prog-id=260 op=LOAD Dec 16 03:20:13.797000 audit[5388]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2514 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:13.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465363733633863353230653130656439316631376266326566383834 Dec 16 03:20:13.797000 audit: BPF prog-id=260 op=UNLOAD Dec 16 03:20:13.797000 audit[5388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:13.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465363733633863353230653130656439316631376266326566383834 Dec 16 03:20:13.797000 audit: BPF prog-id=259 op=UNLOAD Dec 16 03:20:13.797000 audit[5388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:13.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465363733633863353230653130656439316631376266326566383834 Dec 16 03:20:13.798000 audit: BPF prog-id=261 op=LOAD Dec 16 03:20:13.798000 audit[5388]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2514 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:13.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465363733633863353230653130656439316631376266326566383834 Dec 16 03:20:13.820000 audit: BPF prog-id=262 op=LOAD Dec 16 03:20:13.821000 audit: BPF prog-id=263 op=LOAD Dec 16 03:20:13.821000 audit[5389]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3098 pid=5389 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:13.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834303031313631366461643136366233636338316332383762323966 Dec 16 03:20:13.821000 audit: BPF prog-id=263 op=UNLOAD Dec 16 03:20:13.821000 audit[5389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3098 pid=5389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:13.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834303031313631366461643136366233636338316332383762323966 Dec 16 03:20:13.821000 audit: BPF prog-id=264 op=LOAD Dec 16 03:20:13.821000 audit[5389]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3098 pid=5389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:13.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834303031313631366461643136366233636338316332383762323966 Dec 16 03:20:13.821000 audit: BPF prog-id=265 op=LOAD Dec 16 03:20:13.821000 audit[5389]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 
ppid=3098 pid=5389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:13.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834303031313631366461643136366233636338316332383762323966 Dec 16 03:20:13.821000 audit: BPF prog-id=265 op=UNLOAD Dec 16 03:20:13.821000 audit[5389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3098 pid=5389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:13.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834303031313631366461643136366233636338316332383762323966 Dec 16 03:20:13.821000 audit: BPF prog-id=264 op=UNLOAD Dec 16 03:20:13.821000 audit[5389]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3098 pid=5389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:13.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834303031313631366461643136366233636338316332383762323966 Dec 16 03:20:13.821000 audit: BPF prog-id=266 op=LOAD Dec 16 03:20:13.821000 audit[5389]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c0001306e8 a2=98 a3=0 items=0 ppid=3098 pid=5389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:13.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834303031313631366461643136366233636338316332383762323966 Dec 16 03:20:13.862746 containerd[1630]: time="2025-12-16T03:20:13.862681153Z" level=info msg="StartContainer for \"4e673c8c520e10ed91f17bf2ef884f1b3461b25ee7b58dfa8b0e02c1b253d519\" returns successfully" Dec 16 03:20:13.870237 containerd[1630]: time="2025-12-16T03:20:13.869730674Z" level=info msg="StartContainer for \"840011616dad166b3cc81c287b29f14347ce87511f5adb42cf110040d15dbafe\" returns successfully" Dec 16 03:20:14.128298 systemd[1]: cri-containerd-ea559b202d2eb23c511fd3586c2c37061797c852658d1964b1108d3139b6457b.scope: Deactivated successfully. Dec 16 03:20:14.128619 systemd[1]: cri-containerd-ea559b202d2eb23c511fd3586c2c37061797c852658d1964b1108d3139b6457b.scope: Consumed 3.718s CPU time, 87.5M memory peak, 63.9M read from disk. 
Dec 16 03:20:14.129000 audit: BPF prog-id=267 op=LOAD Dec 16 03:20:14.130000 audit: BPF prog-id=92 op=UNLOAD Dec 16 03:20:14.131610 containerd[1630]: time="2025-12-16T03:20:14.130864579Z" level=info msg="received container exit event container_id:\"ea559b202d2eb23c511fd3586c2c37061797c852658d1964b1108d3139b6457b\" id:\"ea559b202d2eb23c511fd3586c2c37061797c852658d1964b1108d3139b6457b\" pid:2658 exit_status:1 exited_at:{seconds:1765855214 nanos:129949781}" Dec 16 03:20:14.131000 audit: BPF prog-id=103 op=UNLOAD Dec 16 03:20:14.131000 audit: BPF prog-id=107 op=UNLOAD Dec 16 03:20:14.223058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea559b202d2eb23c511fd3586c2c37061797c852658d1964b1108d3139b6457b-rootfs.mount: Deactivated successfully. Dec 16 03:20:14.577930 kubelet[2816]: I1216 03:20:14.577189 2816 scope.go:117] "RemoveContainer" containerID="ea559b202d2eb23c511fd3586c2c37061797c852658d1964b1108d3139b6457b" Dec 16 03:20:14.581438 containerd[1630]: time="2025-12-16T03:20:14.581385194Z" level=info msg="CreateContainer within sandbox \"d49818394ab21648d6cc79ed472cd8c97448bf63fa5f91398acbc113af039305\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 16 03:20:14.625188 containerd[1630]: time="2025-12-16T03:20:14.624203714Z" level=info msg="Container c4e5557a8f1049f37f14970dd7e0cfceb123dbfdb1361ca5e0c62ad4581f0e9b: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:20:14.632540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount889310392.mount: Deactivated successfully. 
Dec 16 03:20:14.637158 containerd[1630]: time="2025-12-16T03:20:14.637118175Z" level=info msg="CreateContainer within sandbox \"d49818394ab21648d6cc79ed472cd8c97448bf63fa5f91398acbc113af039305\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c4e5557a8f1049f37f14970dd7e0cfceb123dbfdb1361ca5e0c62ad4581f0e9b\"" Dec 16 03:20:14.638492 containerd[1630]: time="2025-12-16T03:20:14.638471463Z" level=info msg="StartContainer for \"c4e5557a8f1049f37f14970dd7e0cfceb123dbfdb1361ca5e0c62ad4581f0e9b\"" Dec 16 03:20:14.640346 containerd[1630]: time="2025-12-16T03:20:14.640327979Z" level=info msg="connecting to shim c4e5557a8f1049f37f14970dd7e0cfceb123dbfdb1361ca5e0c62ad4581f0e9b" address="unix:///run/containerd/s/be3ffc8131eee1b2a0a83e5990898f26d31554997e3f15fdf3ccad4dffe763d0" protocol=ttrpc version=3 Dec 16 03:20:14.668898 systemd[1]: Started cri-containerd-c4e5557a8f1049f37f14970dd7e0cfceb123dbfdb1361ca5e0c62ad4581f0e9b.scope - libcontainer container c4e5557a8f1049f37f14970dd7e0cfceb123dbfdb1361ca5e0c62ad4581f0e9b. 
Dec 16 03:20:14.684000 audit: BPF prog-id=268 op=LOAD Dec 16 03:20:14.685000 audit: BPF prog-id=269 op=LOAD Dec 16 03:20:14.685000 audit[5465]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=2504 pid=5465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:14.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653535353761386631303439663337663134393730646437653063 Dec 16 03:20:14.685000 audit: BPF prog-id=269 op=UNLOAD Dec 16 03:20:14.685000 audit[5465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=5465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:14.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653535353761386631303439663337663134393730646437653063 Dec 16 03:20:14.685000 audit: BPF prog-id=270 op=LOAD Dec 16 03:20:14.685000 audit[5465]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=2504 pid=5465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:14.685000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653535353761386631303439663337663134393730646437653063 Dec 16 03:20:14.685000 audit: BPF prog-id=271 op=LOAD Dec 16 03:20:14.685000 audit[5465]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=2504 pid=5465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:14.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653535353761386631303439663337663134393730646437653063 Dec 16 03:20:14.685000 audit: BPF prog-id=271 op=UNLOAD Dec 16 03:20:14.685000 audit[5465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=5465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:14.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653535353761386631303439663337663134393730646437653063 Dec 16 03:20:14.685000 audit: BPF prog-id=270 op=UNLOAD Dec 16 03:20:14.685000 audit[5465]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2504 pid=5465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:20:14.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653535353761386631303439663337663134393730646437653063 Dec 16 03:20:14.685000 audit: BPF prog-id=272 op=LOAD Dec 16 03:20:14.685000 audit[5465]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=2504 pid=5465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:20:14.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334653535353761386631303439663337663134393730646437653063 Dec 16 03:20:14.723909 containerd[1630]: time="2025-12-16T03:20:14.723820223Z" level=info msg="StartContainer for \"c4e5557a8f1049f37f14970dd7e0cfceb123dbfdb1361ca5e0c62ad4581f0e9b\" returns successfully" Dec 16 03:20:16.735992 containerd[1630]: time="2025-12-16T03:20:16.735940262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:20:17.168153 containerd[1630]: time="2025-12-16T03:20:17.167860948Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:20:17.169293 containerd[1630]: time="2025-12-16T03:20:17.169173765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:20:17.169293 containerd[1630]: time="2025-12-16T03:20:17.169267969Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:20:17.169474 kubelet[2816]: E1216 03:20:17.169424 2816 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:20:17.169748 kubelet[2816]: E1216 03:20:17.169472 2816 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:20:17.169748 kubelet[2816]: E1216 03:20:17.169588 2816 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsgrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9b7f5fc68-z5vrj_calico-apiserver(0981f349-361d-45e9-bda1-a29e4e4386d6): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:20:17.170807 kubelet[2816]: E1216 03:20:17.170769 2816 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9b7f5fc68-z5vrj" podUID="0981f349-361d-45e9-bda1-a29e4e4386d6"