Dec 16 13:58:23.031333 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:18:19 -00 2025 Dec 16 13:58:23.031389 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 13:58:23.031414 kernel: BIOS-provided physical RAM map: Dec 16 13:58:23.031429 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Dec 16 13:58:23.031443 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Dec 16 13:58:23.031458 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Dec 16 13:58:23.031476 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Dec 16 13:58:23.031492 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Dec 16 13:58:23.031507 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd318fff] usable Dec 16 13:58:23.031527 kernel: BIOS-e820: [mem 0x00000000bd319000-0x00000000bd322fff] ACPI data Dec 16 13:58:23.031543 kernel: BIOS-e820: [mem 0x00000000bd323000-0x00000000bf8ecfff] usable Dec 16 13:58:23.031558 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Dec 16 13:58:23.031573 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Dec 16 13:58:23.031589 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Dec 16 13:58:23.031612 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Dec 16 13:58:23.031629 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Dec 16 13:58:23.031647 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Dec 16 13:58:23.031663 kernel: NX (Execute Disable) protection: active Dec 16 13:58:23.031680 kernel: APIC: Static calls initialized Dec 16 13:58:23.031698 kernel: efi: EFI v2.7 by EDK II Dec 16 13:58:23.031715 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd323018 RNG=0xbfb73018 TPMEventLog=0xbd319018 Dec 16 13:58:23.031732 kernel: random: crng init done Dec 16 13:58:23.031749 kernel: secureboot: Secure boot disabled Dec 16 13:58:23.031779 kernel: SMBIOS 2.4 present. 
Dec 16 13:58:23.031801 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025 Dec 16 13:58:23.031818 kernel: DMI: Memory slots populated: 1/1 Dec 16 13:58:23.031835 kernel: Hypervisor detected: KVM Dec 16 13:58:23.031852 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 16 13:58:23.031869 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 16 13:58:23.031886 kernel: kvm-clock: using sched offset of 11953143235 cycles Dec 16 13:58:23.031904 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 13:58:23.031923 kernel: tsc: Detected 2299.998 MHz processor Dec 16 13:58:23.031940 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 13:58:23.031962 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 13:58:23.031980 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Dec 16 13:58:23.031998 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Dec 16 13:58:23.032016 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 13:58:23.032033 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 16 13:58:23.032051 kernel: Using GB pages for direct mapping Dec 16 13:58:23.032069 kernel: ACPI: Early table checksum verification disabled Dec 16 13:58:23.032096 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Dec 16 13:58:23.032115 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Dec 16 13:58:23.032133 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Dec 16 13:58:23.032152 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Dec 16 13:58:23.032170 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Dec 16 13:58:23.032192 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Dec 16 13:58:23.032211 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Dec 16 13:58:23.032229 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Dec 16 13:58:23.032247 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Dec 16 13:58:23.032266 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Dec 16 13:58:23.032284 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Dec 16 13:58:23.032306 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Dec 16 13:58:23.032325 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Dec 16 13:58:23.032343 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Dec 16 13:58:23.032368 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Dec 16 13:58:23.032386 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Dec 16 13:58:23.032405 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Dec 16 13:58:23.032423 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Dec 16 13:58:23.032446 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Dec 16 13:58:23.032464 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Dec 16 13:58:23.032482 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 16 13:58:23.032500 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Dec 16 13:58:23.032519 kernel: 
ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Dec 16 13:58:23.032538 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Dec 16 13:58:23.032557 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Dec 16 13:58:23.032575 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff] Dec 16 13:58:23.032598 kernel: Zone ranges: Dec 16 13:58:23.032617 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 13:58:23.032636 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 16 13:58:23.032654 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 16 13:58:23.032673 kernel: Device empty Dec 16 13:58:23.032691 kernel: Movable zone start for each node Dec 16 13:58:23.032709 kernel: Early memory node ranges Dec 16 13:58:23.032731 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 16 13:58:23.032750 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 16 13:58:23.032780 kernel: node 0: [mem 0x0000000000100000-0x00000000bd318fff] Dec 16 13:58:23.032799 kernel: node 0: [mem 0x00000000bd323000-0x00000000bf8ecfff] Dec 16 13:58:23.032817 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 16 13:58:23.032835 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 16 13:58:23.032854 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 16 13:58:23.032872 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 13:58:23.032895 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 16 13:58:23.032914 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 16 13:58:23.032932 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Dec 16 13:58:23.032951 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 16 13:58:23.032969 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Dec 16 13:58:23.032988 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 16 13:58:23.033006 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 16 13:58:23.033029 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 13:58:23.033054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 16 13:58:23.033072 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 13:58:23.033091 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 16 13:58:23.033110 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 16 13:58:23.033128 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 13:58:23.033147 kernel: CPU topo: Max. logical packages: 1 Dec 16 13:58:23.033169 kernel: CPU topo: Max. logical dies: 1 Dec 16 13:58:23.033188 kernel: CPU topo: Max. dies per package: 1 Dec 16 13:58:23.033206 kernel: CPU topo: Max. threads per core: 2 Dec 16 13:58:23.033224 kernel: CPU topo: Num. cores per package: 1 Dec 16 13:58:23.033243 kernel: CPU topo: Num. 
threads per package: 2 Dec 16 13:58:23.033261 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 16 13:58:23.033280 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 16 13:58:23.033298 kernel: Booting paravirtualized kernel on KVM Dec 16 13:58:23.033322 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 13:58:23.033340 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 16 13:58:23.033359 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 16 13:58:23.033385 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 16 13:58:23.033403 kernel: pcpu-alloc: [0] 0 1 Dec 16 13:58:23.033421 kernel: kvm-guest: PV spinlocks enabled Dec 16 13:58:23.033440 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 13:58:23.033464 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 13:58:23.033483 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 16 13:58:23.033502 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 13:58:23.033520 kernel: Fallback order for Node 0: 0 Dec 16 13:58:23.033539 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965136 Dec 16 13:58:23.033557 kernel: Policy zone: Normal Dec 16 13:58:23.033580 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 13:58:23.033599 kernel: software IO TLB: area num 2. Dec 16 13:58:23.033636 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 13:58:23.033659 kernel: Kernel/User page tables isolation: enabled Dec 16 13:58:23.033678 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 13:58:23.033698 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 13:58:23.033717 kernel: Dynamic Preempt: voluntary Dec 16 13:58:23.033736 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 13:58:23.033757 kernel: rcu: RCU event tracing is enabled. Dec 16 13:58:23.033788 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 13:58:23.033812 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 13:58:23.033832 kernel: Rude variant of Tasks RCU enabled. Dec 16 13:58:23.033852 kernel: Tracing variant of Tasks RCU enabled. Dec 16 13:58:23.033871 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 13:58:23.033899 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 13:58:23.033919 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:58:23.033939 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:58:23.033959 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 13:58:23.033979 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 16 13:58:23.033999 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 16 13:58:23.034019 kernel: Console: colour dummy device 80x25 Dec 16 13:58:23.034042 kernel: printk: legacy console [ttyS0] enabled Dec 16 13:58:23.034062 kernel: ACPI: Core revision 20240827 Dec 16 13:58:23.034082 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 13:58:23.034101 kernel: x2apic enabled Dec 16 13:58:23.034121 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 13:58:23.034140 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 16 13:58:23.034160 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 16 13:58:23.034183 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Dec 16 13:58:23.034203 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 16 13:58:23.034223 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 16 13:58:23.034243 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 13:58:23.034262 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Dec 16 13:58:23.034282 kernel: Spectre V2 : Mitigation: IBRS Dec 16 13:58:23.034302 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 13:58:23.034325 kernel: RETBleed: Mitigation: IBRS Dec 16 13:58:23.034345 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 16 13:58:23.034372 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Dec 16 13:58:23.034391 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 16 13:58:23.034411 kernel: MDS: Mitigation: Clear CPU buffers Dec 16 13:58:23.034430 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 16 13:58:23.034450 kernel: active return thunk: its_return_thunk Dec 16 13:58:23.034473 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 16 13:58:23.034493 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 13:58:23.034513 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 13:58:23.034532 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 13:58:23.034552 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 13:58:23.034570 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 16 13:58:23.034586 kernel: Freeing SMP alternatives memory: 32K Dec 16 13:58:23.034606 kernel: pid_max: default: 32768 minimum: 301 Dec 16 13:58:23.034624 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 13:58:23.034642 kernel: landlock: Up and running. Dec 16 13:58:23.036256 kernel: SELinux: Initializing. Dec 16 13:58:23.036276 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 13:58:23.036293 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 13:58:23.036311 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 16 13:58:23.036336 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 16 13:58:23.036354 kernel: signal: max sigframe size: 1776 Dec 16 13:58:23.036382 kernel: rcu: Hierarchical SRCU implementation. Dec 16 13:58:23.036402 kernel: rcu: Max phase no-delay instances is 400. 
Dec 16 13:58:23.036420 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 13:58:23.036439 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 16 13:58:23.036457 kernel: smp: Bringing up secondary CPUs ... Dec 16 13:58:23.036473 kernel: smpboot: x86: Booting SMP configuration: Dec 16 13:58:23.036501 kernel: .... node #0, CPUs: #1 Dec 16 13:58:23.036518 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 16 13:58:23.036536 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 16 13:58:23.036554 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 13:58:23.036572 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 16 13:58:23.036591 kernel: Memory: 7580388K/7860544K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15556K init, 2484K bss, 274324K reserved, 0K cma-reserved) Dec 16 13:58:23.036613 kernel: devtmpfs: initialized Dec 16 13:58:23.036632 kernel: x86/mm: Memory block size: 128MB Dec 16 13:58:23.036650 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 16 13:58:23.036670 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 13:58:23.036689 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 13:58:23.036707 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 13:58:23.036727 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 13:58:23.036749 kernel: audit: initializing netlink subsys (disabled) Dec 16 13:58:23.036807 kernel: audit: type=2000 audit(1765893499.696:1): state=initialized audit_enabled=0 res=1 Dec 16 13:58:23.036827 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 13:58:23.037042 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 13:58:23.037062 kernel: cpuidle: using governor menu Dec 16 13:58:23.037082 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 13:58:23.037103 kernel: dca service started, version 1.12.1 Dec 16 13:58:23.037128 kernel: PCI: Using configuration type 1 for base access Dec 16 13:58:23.037148 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 16 13:58:23.037169 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 13:58:23.037189 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 13:58:23.037209 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 13:58:23.037230 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 13:58:23.037250 kernel: ACPI: Added _OSI(Module Device) Dec 16 13:58:23.037272 kernel: ACPI: Added _OSI(Processor Device) Dec 16 13:58:23.037289 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 13:58:23.037306 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 16 13:58:23.037342 kernel: ACPI: Interpreter enabled Dec 16 13:58:23.037404 kernel: ACPI: PM: (supports S0 S3 S5) Dec 16 13:58:23.037448 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 13:58:23.037489 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 13:58:23.037509 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 16 13:58:23.037535 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 16 13:58:23.037555 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 13:58:23.037942 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 16 13:58:23.038212 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 16 13:58:23.038481 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 16 13:58:23.038510 kernel: PCI host bridge to bus 0000:00 Dec 16 13:58:23.040571 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 13:58:23.040898 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 16 13:58:23.041147 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 13:58:23.041395 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 16 13:58:23.041636 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 13:58:23.041947 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Dec 16 13:58:23.042221 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Dec 16 13:58:23.042506 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Dec 16 13:58:23.043045 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 16 13:58:23.043892 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Dec 16 13:58:23.044180 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Dec 16 13:58:23.044459 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Dec 16 13:58:23.044732 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 16 13:58:23.045017 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Dec 16 13:58:23.045279 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Dec 16 13:58:23.045558 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 16 13:58:23.048100 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Dec 16 13:58:23.048384 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Dec 16 13:58:23.048410 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 16 13:58:23.048432 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 16 13:58:23.048453 kernel: ACPI: PCI: 
Interrupt link LNKC configured for IRQ 11 Dec 16 13:58:23.048474 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 16 13:58:23.048501 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 16 13:58:23.048522 kernel: iommu: Default domain type: Translated Dec 16 13:58:23.048543 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 13:58:23.048563 kernel: efivars: Registered efivars operations Dec 16 13:58:23.048584 kernel: PCI: Using ACPI for IRQ routing Dec 16 13:58:23.048604 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 13:58:23.048624 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 16 13:58:23.048648 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 16 13:58:23.048667 kernel: e820: reserve RAM buffer [mem 0xbd319000-0xbfffffff] Dec 16 13:58:23.048687 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 16 13:58:23.048707 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 16 13:58:23.048727 kernel: vgaarb: loaded Dec 16 13:58:23.048746 kernel: clocksource: Switched to clocksource kvm-clock Dec 16 13:58:23.048783 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 13:58:23.048808 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 13:58:23.048829 kernel: pnp: PnP ACPI init Dec 16 13:58:23.048849 kernel: pnp: PnP ACPI: found 7 devices Dec 16 13:58:23.048870 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 13:58:23.048890 kernel: NET: Registered PF_INET protocol family Dec 16 13:58:23.048911 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 13:58:23.048931 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 16 13:58:23.048955 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 13:58:23.048976 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 13:58:23.048996 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 16 13:58:23.049016 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 16 13:58:23.049036 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 16 13:58:23.049057 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 16 13:58:23.049077 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 13:58:23.049101 kernel: NET: Registered PF_XDP protocol family Dec 16 13:58:23.049353 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 13:58:23.049604 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 13:58:23.051831 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 13:58:23.052102 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 16 13:58:23.052383 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 16 13:58:23.052418 kernel: PCI: CLS 0 bytes, default 64 Dec 16 13:58:23.052438 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 16 13:58:23.052457 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 16 13:58:23.052477 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 16 13:58:23.052497 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 16 13:58:23.052516 kernel: clocksource: Switched to clocksource tsc Dec 16 13:58:23.052535 
kernel: Initialise system trusted keyrings Dec 16 13:58:23.052558 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 16 13:58:23.052577 kernel: Key type asymmetric registered Dec 16 13:58:23.052596 kernel: Asymmetric key parser 'x509' registered Dec 16 13:58:23.052615 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 13:58:23.052634 kernel: io scheduler mq-deadline registered Dec 16 13:58:23.052653 kernel: io scheduler kyber registered Dec 16 13:58:23.052673 kernel: io scheduler bfq registered Dec 16 13:58:23.052696 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 13:58:23.052716 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 16 13:58:23.053016 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 16 13:58:23.053051 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Dec 16 13:58:23.053326 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 16 13:58:23.053353 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 16 13:58:23.053626 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 16 13:58:23.053658 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 13:58:23.053679 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 13:58:23.053698 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 16 13:58:23.053718 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 16 13:58:23.053738 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 16 13:58:23.057152 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 16 13:58:23.057204 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 13:58:23.057224 kernel: i8042: Warning: Keylock active Dec 16 13:58:23.057241 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 13:58:23.057261 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 13:58:23.057559 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 16 13:58:23.057845 kernel: rtc_cmos 00:00: registered as rtc0 Dec 16 13:58:23.058122 kernel: rtc_cmos 00:00: setting system clock to 2025-12-16T13:58:21 UTC (1765893501) Dec 16 13:58:23.058420 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 16 13:58:23.058445 kernel: intel_pstate: CPU model not supported Dec 16 13:58:23.058463 kernel: pstore: Using crash dump compression: deflate Dec 16 13:58:23.058481 kernel: pstore: Registered efi_pstore as persistent store backend Dec 16 13:58:23.058497 kernel: NET: Registered PF_INET6 protocol family Dec 16 13:58:23.058515 kernel: Segment Routing with IPv6 Dec 16 13:58:23.058535 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 13:58:23.058560 kernel: NET: Registered PF_PACKET protocol family Dec 16 13:58:23.058580 kernel: Key type dns_resolver registered Dec 16 13:58:23.058600 kernel: IPI shorthand broadcast: enabled Dec 16 13:58:23.058619 kernel: sched_clock: Marking stable (1930004145, 147764496)->(2093335460, -15566819) Dec 16 13:58:23.058639 kernel: registered taskstats version 1 Dec 16 13:58:23.058659 kernel: Loading compiled-in X.509 certificates Dec 16 13:58:23.058679 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: aafd1eb27ea805b8231c3bede9210239fae84df8' Dec 16 13:58:23.058703 kernel: Demotion targets for Node 0: null Dec 16 13:58:23.058723 kernel: Key type .fscrypt registered Dec 16 13:58:23.058740 kernel: Key type fscrypt-provisioning registered Dec 16 
13:58:23.058758 kernel: ima: Allocated hash algorithm: sha1 Dec 16 13:58:23.060561 kernel: ima: Can not allocate sha384 (reason: -2) Dec 16 13:58:23.060581 kernel: ima: No architecture policies found Dec 16 13:58:23.060598 kernel: clk: Disabling unused clocks Dec 16 13:58:23.060624 kernel: Freeing unused kernel image (initmem) memory: 15556K Dec 16 13:58:23.060642 kernel: Write protecting the kernel read-only data: 47104k Dec 16 13:58:23.060658 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Dec 16 13:58:23.060676 kernel: Run /init as init process Dec 16 13:58:23.060855 kernel: with arguments: Dec 16 13:58:23.060874 kernel: /init Dec 16 13:58:23.060893 kernel: with environment: Dec 16 13:58:23.060919 kernel: HOME=/ Dec 16 13:58:23.060937 kernel: TERM=linux Dec 16 13:58:23.060957 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 13:58:23.060976 kernel: SCSI subsystem initialized Dec 16 13:58:23.061288 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Dec 16 13:58:23.061586 kernel: scsi host0: Virtio SCSI HBA Dec 16 13:58:23.065944 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Dec 16 13:58:23.066260 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Dec 16 13:58:23.066781 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Dec 16 13:58:23.067118 kernel: sd 0:0:1:0: [sda] Write Protect is off Dec 16 13:58:23.067427 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Dec 16 13:58:23.067748 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 16 13:58:23.069999 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 13:58:23.070045 kernel: GPT:25804799 != 33554431 Dec 16 13:58:23.070068 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 13:58:23.070087 kernel: GPT:25804799 != 33554431 Dec 16 13:58:23.070106 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 13:58:23.070125 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 13:58:23.070455 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Dec 16 13:58:23.070483 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 13:58:23.070508 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:58:23.070529 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:58:23.070550 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 16 13:58:23.070569 kernel: raid6: avx2x4 gen() 18137 MB/s Dec 16 13:58:23.070589 kernel: raid6: avx2x2 gen() 18193 MB/s Dec 16 13:58:23.070613 kernel: raid6: avx2x1 gen() 14043 MB/s Dec 16 13:58:23.070643 kernel: raid6: using algorithm avx2x2 gen() 18193 MB/s Dec 16 13:58:23.070663 kernel: raid6: .... 
xor() 18519 MB/s, rmw enabled Dec 16 13:58:23.070684 kernel: raid6: using avx2x2 recovery algorithm Dec 16 13:58:23.070706 kernel: xor: automatically using best checksumming function avx Dec 16 13:58:23.070726 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:58:23.070747 kernel: BTRFS: device fsid 57a8262f-2900-48ba-a17e-aafbd70d59c7 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (156) Dec 16 13:58:23.070789 kernel: BTRFS info (device dm-0): first mount of filesystem 57a8262f-2900-48ba-a17e-aafbd70d59c7 Dec 16 13:58:23.070808 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:58:23.070828 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 13:58:23.070848 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:58:23.070869 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:58:23.070889 kernel: loop: module loaded Dec 16 13:58:23.070909 kernel: loop0: detected capacity change from 0 to 100528 Dec 16 13:58:23.070935 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 13:58:23.070956 systemd[1]: Successfully made /usr/ read-only. Dec 16 13:58:23.070983 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:58:23.071006 systemd[1]: Detected virtualization google. Dec 16 13:58:23.071026 systemd[1]: Detected architecture x86-64. Dec 16 13:58:23.071047 systemd[1]: Running in initrd. Dec 16 13:58:23.071072 systemd[1]: No hostname configured, using default hostname. Dec 16 13:58:23.071094 systemd[1]: Hostname set to . Dec 16 13:58:23.071114 systemd[1]: Initializing machine ID from random generator. Dec 16 13:58:23.071135 systemd[1]: Queued start job for default target initrd.target. Dec 16 13:58:23.071157 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:58:23.071178 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:58:23.071200 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:58:23.071227 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 13:58:23.071249 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:58:23.071272 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 13:58:23.071294 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 13:58:23.071316 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:58:23.071340 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:58:23.071362 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:58:23.071383 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:58:23.071406 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:58:23.071438 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:58:23.071460 systemd[1]: Reached target timers.target - Timer Units. 
Dec 16 13:58:23.071482 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:58:23.071504 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:58:23.071527 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 13:58:23.071550 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 13:58:23.071573 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 13:58:23.071599 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:58:23.071631 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:58:23.071654 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:58:23.071675 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:58:23.071697 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 13:58:23.071718 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 13:58:23.071739 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:58:23.074677 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 13:58:23.074714 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 13:58:23.074734 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 13:58:23.074754 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:58:23.074790 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:58:23.074822 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:58:23.074844 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 13:58:23.074867 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:58:23.074888 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 13:58:23.074909 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:58:23.074974 systemd-journald[293]: Collecting audit messages is enabled. Dec 16 13:58:23.075004 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 13:58:23.075021 kernel: Bridge firewalling registered Dec 16 13:58:23.075037 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:58:23.075051 kernel: audit: type=1130 audit(1765893503.059:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.075064 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:58:23.075078 systemd-journald[293]: Journal started Dec 16 13:58:23.075105 systemd-journald[293]: Runtime Journal (/run/log/journal/5feb3cac04cd49ca8054aa82b2942ac6) is 8M, max 148.4M, 140.4M free. Dec 16 13:58:23.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:58:23.058073 systemd-modules-load[294]: Inserted module 'br_netfilter' Dec 16 13:58:23.083845 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:58:23.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.089888 kernel: audit: type=1130 audit(1765893503.082:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.099028 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:58:23.110891 kernel: audit: type=1130 audit(1765893503.105:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.106997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:58:23.111169 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:58:23.125072 kernel: audit: type=1130 audit(1765893503.117:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.122604 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:58:23.132600 kernel: audit: type=1130 audit(1765893503.126:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.132007 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 13:58:23.136000 audit: BPF prog-id=6 op=LOAD Dec 16 13:58:23.135519 systemd-tmpfiles[308]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 13:58:23.142900 kernel: audit: type=1334 audit(1765893503.136:7): prog-id=6 op=LOAD Dec 16 13:58:23.141295 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:58:23.151284 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:58:23.159537 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:58:23.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:58:23.164169 kernel: audit: type=1130 audit(1765893503.158:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.182903 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:58:23.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.186852 kernel: audit: type=1130 audit(1765893503.181:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.196712 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:58:23.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.205834 kernel: audit: type=1130 audit(1765893503.201:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.208992 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 13:58:23.244362 systemd-resolved[316]: Positive Trust Anchors: Dec 16 13:58:23.244928 systemd-resolved[316]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:58:23.244938 systemd-resolved[316]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 13:58:23.252679 dracut-cmdline[332]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 13:58:23.245161 systemd-resolved[316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:58:23.299084 systemd-resolved[316]: Defaulting to hostname 'linux'. Dec 16 13:58:23.302046 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:58:23.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.308012 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:58:23.386804 kernel: Loading iSCSI transport class v2.0-870. 
Dec 16 13:58:23.413813 kernel: iscsi: registered transport (tcp) Dec 16 13:58:23.450861 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:58:23.450959 kernel: QLogic iSCSI HBA Driver Dec 16 13:58:23.485175 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:58:23.520519 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:58:23.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.532794 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:58:23.609812 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:58:23.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.612178 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 13:58:23.627144 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 13:58:23.708670 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:58:23.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.727000 audit: BPF prog-id=7 op=LOAD Dec 16 13:58:23.727000 audit: BPF prog-id=8 op=LOAD Dec 16 13:58:23.730665 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:58:23.794968 systemd-udevd[594]: Using default interface naming scheme 'v257'. Dec 16 13:58:23.806584 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:58:23.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.815911 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:58:23.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.838231 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 13:58:23.855000 audit: BPF prog-id=9 op=LOAD Dec 16 13:58:23.857791 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:58:23.908404 dracut-pre-trigger[670]: rd.md=0: removing MD RAID activation Dec 16 13:58:23.949739 systemd-networkd[671]: lo: Link UP Dec 16 13:58:23.950237 systemd-networkd[671]: lo: Gained carrier Dec 16 13:58:23.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:23.951068 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:58:23.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:58:23.965334 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:58:23.981858 systemd[1]: Reached target network.target - Network. Dec 16 13:58:23.998953 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:58:24.129147 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:58:24.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:24.155874 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 13:58:24.299139 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Dec 16 13:58:24.358792 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:58:24.391659 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Dec 16 13:58:24.424660 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Dec 16 13:58:24.477970 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 16 13:58:24.478012 kernel: AES CTR mode by8 optimization enabled Dec 16 13:58:24.505469 systemd-networkd[671]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:58:24.508093 systemd-networkd[671]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:58:24.512402 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Dec 16 13:58:24.514255 systemd-networkd[671]: eth0: Link UP Dec 16 13:58:24.515596 systemd-networkd[671]: eth0: Gained carrier Dec 16 13:58:24.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:24.515614 systemd-networkd[671]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:58:24.531858 systemd-networkd[671]: eth0: DHCPv4 address 10.128.0.79/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 16 13:58:24.541032 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:58:24.653033 disk-uuid[805]: Primary Header is updated. Dec 16 13:58:24.653033 disk-uuid[805]: Secondary Entries is updated. Dec 16 13:58:24.653033 disk-uuid[805]: Secondary Header is updated. Dec 16 13:58:24.555502 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:58:24.555689 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:58:24.580952 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:58:24.596182 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:58:24.716660 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:58:24.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:24.816536 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Dec 16 13:58:24.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:24.817907 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:58:24.844866 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:58:24.855047 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:58:24.874681 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 13:58:24.931003 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:58:24.973913 kernel: kauditd_printk_skb: 15 callbacks suppressed Dec 16 13:58:24.973957 kernel: audit: type=1130 audit(1765893504.929:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:24.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:25.712226 disk-uuid[807]: Warning: The kernel is still using the old partition table. Dec 16 13:58:25.712226 disk-uuid[807]: The new table will be used at the next reboot or after you Dec 16 13:58:25.712226 disk-uuid[807]: run partprobe(8) or kpartx(8) Dec 16 13:58:25.712226 disk-uuid[807]: The operation has completed successfully. Dec 16 13:58:25.799931 kernel: audit: type=1130 audit(1765893505.730:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:25.799969 kernel: audit: type=1131 audit(1765893505.730:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:25.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:25.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:25.719177 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:58:25.719320 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 13:58:25.736031 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Dec 16 13:58:25.839820 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (835) Dec 16 13:58:25.857823 kernel: BTRFS info (device sda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 13:58:25.857921 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:58:25.875148 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 13:58:25.875247 kernel: BTRFS info (device sda6): turning on async discard Dec 16 13:58:25.875274 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 13:58:25.896814 kernel: BTRFS info (device sda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 13:58:25.897668 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 13:58:25.935001 kernel: audit: type=1130 audit(1765893505.906:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:25.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:25.912000 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 13:58:26.119908 systemd-networkd[671]: eth0: Gained IPv6LL Dec 16 13:58:26.181480 ignition[854]: Ignition 2.24.0 Dec 16 13:58:26.181851 ignition[854]: Stage: fetch-offline Dec 16 13:58:26.183794 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:58:26.231972 kernel: audit: type=1130 audit(1765893506.199:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:26.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:26.181917 ignition[854]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:58:26.223621 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 16 13:58:26.181936 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 16 13:58:26.182069 ignition[854]: parsed url from cmdline: "" Dec 16 13:58:26.182074 ignition[854]: no config URL provided Dec 16 13:58:26.182081 ignition[854]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:58:26.182103 ignition[854]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:58:26.182111 ignition[854]: failed to fetch config: resource requires networking Dec 16 13:58:26.287097 unknown[860]: fetched base config from "system" Dec 16 13:58:26.334898 kernel: audit: type=1130 audit(1765893506.306:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:26.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:58:26.182373 ignition[854]: Ignition finished successfully Dec 16 13:58:26.287109 unknown[860]: fetched base config from "system" Dec 16 13:58:26.272964 ignition[860]: Ignition 2.24.0 Dec 16 13:58:26.287121 unknown[860]: fetched user config from "gcp" Dec 16 13:58:26.272978 ignition[860]: Stage: fetch Dec 16 13:58:26.290654 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 13:58:26.273281 ignition[860]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:58:26.415940 kernel: audit: type=1130 audit(1765893506.387:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:26.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:26.310987 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 13:58:26.273295 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 16 13:58:26.379832 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 13:58:26.273431 ignition[860]: parsed url from cmdline: "" Dec 16 13:58:26.391424 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 13:58:26.273437 ignition[860]: no config URL provided Dec 16 13:58:26.496910 kernel: audit: type=1130 audit(1765893506.469:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:26.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:26.459528 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 13:58:26.273452 ignition[860]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:58:26.472122 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 13:58:26.273463 ignition[860]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:58:26.497191 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 13:58:26.273501 ignition[860]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Dec 16 13:58:26.514175 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:58:26.276279 ignition[860]: GET result: OK Dec 16 13:58:26.533106 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:58:26.276471 ignition[860]: parsing config with SHA512: 1e7052aa978333589b34f81533cadc6cbdbb76dc8f0fdb02394ceace8e2e47f6ed4ba37234b46b4ed4c00089b16547cf84c9037fb0f1423429393f1a853f7afa Dec 16 13:58:26.550165 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:58:26.287656 ignition[860]: fetch: fetch complete Dec 16 13:58:26.567688 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 16 13:58:26.287662 ignition[860]: fetch: fetch passed Dec 16 13:58:26.287722 ignition[860]: Ignition finished successfully Dec 16 13:58:26.377115 ignition[867]: Ignition 2.24.0 Dec 16 13:58:26.377123 ignition[867]: Stage: kargs Dec 16 13:58:26.377301 ignition[867]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:58:26.377312 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 16 13:58:26.378376 ignition[867]: kargs: kargs passed Dec 16 13:58:26.378435 ignition[867]: Ignition finished successfully Dec 16 13:58:26.457215 ignition[873]: Ignition 2.24.0 Dec 16 13:58:26.457226 ignition[873]: Stage: disks Dec 16 13:58:26.457406 ignition[873]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:58:26.457418 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 16 13:58:26.458314 ignition[873]: disks: disks passed Dec 16 13:58:26.458372 ignition[873]: Ignition finished successfully Dec 16 13:58:26.646395 systemd-fsck[881]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 16 13:58:26.729480 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 13:58:26.770141 kernel: audit: type=1130 audit(1765893506.727:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:26.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:26.731914 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 13:58:26.971801 kernel: EXT4-fs (sda9): mounted filesystem 1314c107-11a5-486b-9d52-be9f57b6bf1b r/w with ordered data mode. Quota mode: none. Dec 16 13:58:26.972869 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 13:58:26.981548 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 13:58:27.001017 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:58:27.014782 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 13:58:27.027376 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 13:58:27.108979 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (889) Dec 16 13:58:27.109023 kernel: BTRFS info (device sda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 13:58:27.109050 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:58:27.109075 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 13:58:27.109099 kernel: BTRFS info (device sda6): turning on async discard Dec 16 13:58:27.109120 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 13:58:27.027430 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 13:58:27.027464 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:58:27.066653 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 13:58:27.096861 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 13:58:27.117284 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 13:58:27.577171 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Dec 16 13:58:27.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:27.596941 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 13:58:27.630988 kernel: audit: type=1130 audit(1765893507.592:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:27.615539 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 13:58:27.649868 kernel: BTRFS info (device sda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 13:58:27.658124 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 13:58:27.703245 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 13:58:27.703697 ignition[985]: INFO : Ignition 2.24.0 Dec 16 13:58:27.703697 ignition[985]: INFO : Stage: mount Dec 16 13:58:27.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:27.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:27.718336 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 13:58:27.746933 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:58:27.746933 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 16 13:58:27.746933 ignition[985]: INFO : mount: mount passed Dec 16 13:58:27.746933 ignition[985]: INFO : Ignition finished successfully Dec 16 13:58:27.735460 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 13:58:27.975170 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:58:28.016814 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (997) Dec 16 13:58:28.034533 kernel: BTRFS info (device sda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 13:58:28.034625 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:58:28.050906 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 13:58:28.051023 kernel: BTRFS info (device sda6): turning on async discard Dec 16 13:58:28.051052 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 13:58:28.059515 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 13:58:28.097164 ignition[1014]: INFO : Ignition 2.24.0 Dec 16 13:58:28.097164 ignition[1014]: INFO : Stage: files Dec 16 13:58:28.110960 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:58:28.110960 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 16 13:58:28.110960 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping Dec 16 13:58:28.110960 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 13:58:28.110960 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 13:58:28.110960 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 13:58:28.110960 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 13:58:28.110960 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 13:58:28.110657 unknown[1014]: wrote ssh authorized keys file for user: core Dec 16 13:58:28.205935 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 16 13:58:28.205935 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Dec 16 13:58:28.237979 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 13:58:28.567356 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 16 13:58:28.567356 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 13:58:28.598913 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 13:58:28.598913 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:58:28.598913 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:58:28.598913 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:58:28.598913 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:58:28.598913 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:58:28.598913 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:58:28.598913 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:58:28.598913 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:58:28.598913 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:58:28.598913 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:58:28.598913 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:58:28.598913 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Dec 16 13:58:34.731650 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 13:58:35.602292 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:58:35.602292 ignition[1014]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 13:58:35.663989 kernel: kauditd_printk_skb: 2 callbacks suppressed Dec 16 13:58:35.664041 kernel: audit: type=1130 audit(1765893515.618:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:35.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:35.664155 ignition[1014]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:58:35.664155 ignition[1014]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:58:35.664155 ignition[1014]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 13:58:35.664155 ignition[1014]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 13:58:35.664155 ignition[1014]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 13:58:35.664155 ignition[1014]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:58:35.664155 ignition[1014]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:58:35.664155 ignition[1014]: INFO : files: files passed Dec 16 13:58:35.664155 ignition[1014]: INFO : Ignition finished successfully Dec 16 13:58:35.879937 kernel: audit: type=1130 audit(1765893515.768:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:35.879991 kernel: audit: type=1131 audit(1765893515.768:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:35.880019 kernel: audit: type=1130 audit(1765893515.833:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:35.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:58:35.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:35.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:35.609965 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 13:58:35.622572 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 13:58:35.674225 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 13:58:35.705479 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 13:58:35.929906 initrd-setup-root-after-ignition[1044]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:58:35.929906 initrd-setup-root-after-ignition[1044]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:58:35.705607 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 13:58:36.014936 kernel: audit: type=1130 audit(1765893515.964:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.014985 kernel: audit: type=1131 audit(1765893515.964:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:35.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:35.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.015124 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:58:35.814596 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:58:35.835148 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 13:58:35.865195 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 13:58:35.950176 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 13:58:35.950292 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 13:58:35.966601 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 13:58:36.024051 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 13:58:36.046576 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 13:58:36.163921 kernel: audit: type=1130 audit(1765893516.130:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:58:36.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.047964 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 13:58:36.117291 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 13:58:36.134985 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 13:58:36.206137 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:58:36.206828 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:58:36.234035 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:58:36.253086 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 13:58:36.253467 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 13:58:36.304911 kernel: audit: type=1131 audit(1765893516.266:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.253660 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 13:58:36.305292 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 13:58:36.315211 systemd[1]: Stopped target basic.target - Basic System. Dec 16 13:58:36.331231 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 13:58:36.345363 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:58:36.362343 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 13:58:36.381311 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:58:36.398342 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 13:58:36.431141 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:58:36.431592 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 13:58:36.451349 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 13:58:36.467344 systemd[1]: Stopped target swap.target - Swaps. Dec 16 13:58:36.496083 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 13:58:36.496415 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:58:36.541984 kernel: audit: type=1131 audit(1765893516.511:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.542297 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:58:36.542692 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:58:36.560288 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Dec 16 13:58:36.560469 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:58:36.635998 kernel: audit: type=1131 audit(1765893516.594:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.579273 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 13:58:36.579479 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 13:58:36.636393 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 13:58:36.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.636656 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:58:36.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.663346 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 13:58:36.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.663547 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 13:58:36.682710 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 13:58:36.737045 ignition[1069]: INFO : Ignition 2.24.0 Dec 16 13:58:36.737045 ignition[1069]: INFO : Stage: umount Dec 16 13:58:36.737045 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:58:36.737045 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 16 13:58:36.737045 ignition[1069]: INFO : umount: umount passed Dec 16 13:58:36.737045 ignition[1069]: INFO : Ignition finished successfully Dec 16 13:58:36.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.699939 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 13:58:36.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.700236 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:58:36.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.713888 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 13:58:36.743923 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Dec 16 13:58:36.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.744212 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:58:36.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.783321 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 13:58:36.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.783566 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:58:36.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.807295 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 13:58:36.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.807507 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:58:36.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.840041 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 13:58:36.841560 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 13:58:36.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.841684 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 13:58:36.853531 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 13:58:36.853656 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 13:58:36.876008 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 13:58:36.876125 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 13:58:36.891737 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 13:58:36.891882 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 13:58:36.909094 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 13:58:37.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.909170 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Dec 16 13:58:37.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.927076 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 13:58:37.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.927156 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 13:58:36.945152 systemd[1]: Stopped target network.target - Network. Dec 16 13:58:36.961039 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 13:58:37.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:36.961160 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:58:37.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:37.226000 audit: BPF prog-id=6 op=UNLOAD Dec 16 13:58:37.226000 audit: BPF prog-id=9 op=UNLOAD Dec 16 13:58:36.968211 systemd[1]: Stopped target paths.target - Path Units. Dec 16 13:58:36.993998 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 13:58:36.995921 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:58:37.002295 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 13:58:37.020511 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 13:58:37.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:37.038549 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 13:58:37.038647 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 13:58:37.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:37.056720 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 13:58:37.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:37.056933 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 13:58:37.084050 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 13:58:37.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:37.084117 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 13:58:37.102001 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 13:58:37.102123 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 13:58:37.119024 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Dec 16 13:58:37.119125 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 13:58:37.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:37.135018 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 13:58:37.135142 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 13:58:37.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:37.153244 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 13:58:37.172214 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 13:58:37.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:37.191609 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 13:58:37.191757 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 13:58:37.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:37.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:37.208888 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 13:58:37.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:37.209022 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 13:58:37.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:37.228501 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 13:58:37.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:37.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:37.243997 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 13:58:37.244088 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:58:37.262196 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 13:58:37.270106 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 13:58:37.270192 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:58:37.297263 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Dec 16 13:58:37.297350 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:58:37.331116 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 13:58:37.331205 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 13:58:37.723911 systemd-journald[293]: Received SIGTERM from PID 1 (systemd). Dec 16 13:58:37.347158 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:58:37.365658 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 13:58:37.365854 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:58:37.376898 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 13:58:37.377074 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 13:58:37.402056 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 13:58:37.402133 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:58:37.429091 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 13:58:37.429179 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:58:37.463100 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 13:58:37.463329 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 13:58:37.488112 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 13:58:37.488345 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:58:37.517483 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 13:58:37.532884 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 13:58:37.533009 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:58:37.544013 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 13:58:37.544120 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:58:37.555117 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:58:37.555198 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:58:37.576125 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 13:58:37.576260 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 13:58:37.593406 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 13:58:37.593544 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 13:58:37.613185 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 13:58:37.631169 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 13:58:37.677877 systemd[1]: Switching root. 
Dec 16 13:58:37.971929 systemd-journald[293]: Journal stopped Dec 16 13:58:40.763993 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 13:58:40.764041 kernel: SELinux: policy capability open_perms=1 Dec 16 13:58:40.764069 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 13:58:40.764089 kernel: SELinux: policy capability always_check_network=0 Dec 16 13:58:40.764107 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 13:58:40.764126 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 13:58:40.764148 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 13:58:40.764178 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 13:58:40.764199 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 13:58:40.764220 systemd[1]: Successfully loaded SELinux policy in 113.425ms. Dec 16 13:58:40.764243 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.625ms. Dec 16 13:58:40.764266 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 13:58:40.764287 systemd[1]: Detected virtualization google. Dec 16 13:58:40.764312 systemd[1]: Detected architecture x86-64. Dec 16 13:58:40.764335 systemd[1]: Detected first boot. Dec 16 13:58:40.764357 systemd[1]: Initializing machine ID from random generator. Dec 16 13:58:40.764379 zram_generator::config[1112]: No configuration found. Dec 16 13:58:40.764405 kernel: Guest personality initialized and is inactive Dec 16 13:58:40.764425 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 13:58:40.764446 kernel: Initialized host personality Dec 16 13:58:40.764466 kernel: NET: Registered PF_VSOCK protocol family Dec 16 13:58:40.764486 systemd[1]: Populated /etc with preset unit settings. Dec 16 13:58:40.764508 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 13:58:40.764530 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 13:58:40.764556 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 13:58:40.764584 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 13:58:40.764606 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 13:58:40.764628 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 13:58:40.764651 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 13:58:40.764678 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 13:58:40.764701 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 13:58:40.764723 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 13:58:40.764744 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 13:58:40.764780 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:58:40.764804 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:58:40.764826 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Dec 16 13:58:40.764853 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 13:58:40.764875 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 13:58:40.764898 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:58:40.764920 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 13:58:40.764949 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:58:40.764972 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:58:40.764999 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 13:58:40.765021 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 13:58:40.765044 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 13:58:40.765067 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 13:58:40.765089 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:58:40.765112 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:58:40.765134 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 13:58:40.765167 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:58:40.765191 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:58:40.765214 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 13:58:40.765236 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 13:58:40.765259 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 13:58:40.765287 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 13:58:40.765310 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 13:58:40.765333 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:58:40.765356 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 13:58:40.765379 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 13:58:40.765406 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:58:40.765428 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:58:40.765451 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 13:58:40.765475 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 13:58:40.765497 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 13:58:40.765520 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 13:58:40.765543 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:58:40.765570 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 13:58:40.765593 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 13:58:40.765616 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 13:58:40.765639 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Dec 16 13:58:40.765662 systemd[1]: Reached target machines.target - Containers. Dec 16 13:58:40.765687 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 13:58:40.765714 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:58:40.765737 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:58:40.765760 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 13:58:40.765794 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:58:40.765817 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:58:40.765840 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:58:40.765863 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 13:58:40.765890 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:58:40.765913 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 13:58:40.765936 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 13:58:40.765958 kernel: fuse: init (API version 7.41) Dec 16 13:58:40.765981 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 13:58:40.766004 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 13:58:40.766031 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 13:58:40.766053 kernel: ACPI: bus type drm_connector registered Dec 16 13:58:40.766076 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:58:40.766099 kernel: kauditd_printk_skb: 53 callbacks suppressed Dec 16 13:58:40.766121 kernel: audit: type=1334 audit(1765893520.620:101): prog-id=14 op=UNLOAD Dec 16 13:58:40.766142 kernel: audit: type=1334 audit(1765893520.620:102): prog-id=13 op=UNLOAD Dec 16 13:58:40.766170 kernel: audit: type=1334 audit(1765893520.621:103): prog-id=15 op=LOAD Dec 16 13:58:40.766198 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:58:40.766221 kernel: audit: type=1334 audit(1765893520.625:104): prog-id=16 op=LOAD Dec 16 13:58:40.766242 kernel: audit: type=1334 audit(1765893520.632:105): prog-id=17 op=LOAD Dec 16 13:58:40.766264 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:58:40.766287 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:58:40.766310 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 13:58:40.766369 systemd-journald[1201]: Collecting audit messages is enabled. 
Dec 16 13:58:40.766413 kernel: audit: type=1305 audit(1765893520.740:106): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 13:58:40.766436 kernel: audit: type=1300 audit(1765893520.740:106): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffb3b93600 a2=4000 a3=0 items=0 ppid=1 pid=1201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:58:40.766463 systemd-journald[1201]: Journal started Dec 16 13:58:40.766505 systemd-journald[1201]: Runtime Journal (/run/log/journal/940296a71dbd4815830f48066fd780ae) is 8M, max 148.4M, 140.4M free. Dec 16 13:58:40.010000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 13:58:40.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:40.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:40.620000 audit: BPF prog-id=14 op=UNLOAD Dec 16 13:58:40.620000 audit: BPF prog-id=13 op=UNLOAD Dec 16 13:58:40.621000 audit: BPF prog-id=15 op=LOAD Dec 16 13:58:40.625000 audit: BPF prog-id=16 op=LOAD Dec 16 13:58:40.632000 audit: BPF prog-id=17 op=LOAD Dec 16 13:58:40.740000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 13:58:40.740000 audit[1201]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffb3b93600 a2=4000 a3=0 items=0 ppid=1 pid=1201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:58:39.418619 systemd[1]: Queued start job for default target multi-user.target. Dec 16 13:58:39.433378 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 16 13:58:39.434284 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 13:58:40.740000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 13:58:40.803632 kernel: audit: type=1327 audit(1765893520.740:106): proctitle="/usr/lib/systemd/systemd-journald" Dec 16 13:58:40.831796 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 13:58:40.853817 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:58:40.880788 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:58:40.890806 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:58:40.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:58:40.923845 kernel: audit: type=1130 audit(1765893520.897:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:40.923719 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 13:58:40.933119 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 13:58:40.942083 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 13:58:40.951070 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 13:58:40.960060 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 13:58:40.969061 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 13:58:40.978273 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 13:58:40.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:40.989360 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:58:41.011795 kernel: audit: type=1130 audit(1765893520.987:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.021246 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 13:58:41.021505 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 13:58:41.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.032272 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:58:41.032537 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:58:41.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.043239 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:58:41.043485 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:58:41.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:58:41.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.052219 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:58:41.052466 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:58:41.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.063219 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 13:58:41.063463 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 13:58:41.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.072287 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:58:41.072532 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:58:41.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.083279 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:58:41.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.094381 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:58:41.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.106179 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 13:58:41.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.117375 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Dec 16 13:58:41.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.129175 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:58:41.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.151523 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:58:41.162258 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 13:58:41.174210 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 13:58:41.191134 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 13:58:41.201985 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 13:58:41.202252 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:58:41.213642 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 13:58:41.226413 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:58:41.226739 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 13:58:41.228908 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 13:58:41.249311 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 13:58:41.261588 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:58:41.272226 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 13:58:41.284497 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:58:41.293555 systemd-journald[1201]: Time spent on flushing to /var/log/journal/940296a71dbd4815830f48066fd780ae is 52.807ms for 1095 entries. Dec 16 13:58:41.293555 systemd-journald[1201]: System Journal (/var/log/journal/940296a71dbd4815830f48066fd780ae) is 8M, max 588.1M, 580.1M free. Dec 16 13:58:41.381395 systemd-journald[1201]: Received client request to flush runtime journal. Dec 16 13:58:41.381489 kernel: loop1: detected capacity change from 0 to 49888 Dec 16 13:58:41.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.294102 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:58:41.314060 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 13:58:41.328095 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 13:58:41.341945 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
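The systemd-journald message above reports 52.807 ms spent flushing 1095 entries from the runtime journal to /var/log/journal. As a quick sanity check of the per-entry cost implied by those two figures (a minimal sketch; only the two numbers come from the log):

    # Per-entry flush cost implied by the systemd-journald flush message above.
    flush_ms = 52.807          # "Time spent on flushing ... is 52.807ms"
    entries = 1095             # "... for 1095 entries"
    print(f"~{flush_ms / entries * 1000:.1f} us per journal entry")   # ~48.2 us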
Dec 16 13:58:41.352305 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 13:58:41.362300 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 13:58:41.379992 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:58:41.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.391623 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 13:58:41.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.411126 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 13:58:41.424106 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 13:58:41.462801 kernel: loop2: detected capacity change from 0 to 50784 Dec 16 13:58:41.471222 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 13:58:41.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.483174 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 13:58:41.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.495000 audit: BPF prog-id=18 op=LOAD Dec 16 13:58:41.495000 audit: BPF prog-id=19 op=LOAD Dec 16 13:58:41.495000 audit: BPF prog-id=20 op=LOAD Dec 16 13:58:41.500035 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 13:58:41.509000 audit: BPF prog-id=21 op=LOAD Dec 16 13:58:41.515087 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:58:41.527238 kernel: loop3: detected capacity change from 0 to 224512 Dec 16 13:58:41.532995 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:58:41.544000 audit: BPF prog-id=22 op=LOAD Dec 16 13:58:41.545000 audit: BPF prog-id=23 op=LOAD Dec 16 13:58:41.545000 audit: BPF prog-id=24 op=LOAD Dec 16 13:58:41.555011 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 13:58:41.563000 audit: BPF prog-id=25 op=LOAD Dec 16 13:58:41.563000 audit: BPF prog-id=26 op=LOAD Dec 16 13:58:41.564000 audit: BPF prog-id=27 op=LOAD Dec 16 13:58:41.567979 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 13:58:41.612436 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Dec 16 13:58:41.612471 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Dec 16 13:58:41.640905 kernel: loop4: detected capacity change from 0 to 111560 Dec 16 13:58:41.638520 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 16 13:58:41.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.713112 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 13:58:41.717837 systemd-nsresourced[1255]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 13:58:41.728800 kernel: loop5: detected capacity change from 0 to 49888 Dec 16 13:58:41.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.735134 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 13:58:41.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:41.767802 kernel: loop6: detected capacity change from 0 to 50784 Dec 16 13:58:41.811799 kernel: loop7: detected capacity change from 0 to 224512 Dec 16 13:58:41.861787 kernel: loop1: detected capacity change from 0 to 111560 Dec 16 13:58:41.881352 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 13:58:41.908270 (sd-merge)[1266]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-gce.raw'. Dec 16 13:58:41.931813 (sd-merge)[1266]: Merged extensions into '/usr'. Dec 16 13:58:41.944179 systemd[1]: Reload requested from client PID 1236 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 13:58:41.944204 systemd[1]: Reloading... Dec 16 13:58:42.018375 systemd-oomd[1252]: No swap; memory pressure usage will be degraded Dec 16 13:58:42.076816 systemd-resolved[1253]: Positive Trust Anchors: Dec 16 13:58:42.078817 systemd-resolved[1253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:58:42.078967 systemd-resolved[1253]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 13:58:42.079128 systemd-resolved[1253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:58:42.104286 systemd-resolved[1253]: Defaulting to hostname 'linux'. Dec 16 13:58:42.105799 zram_generator::config[1302]: No configuration found. Dec 16 13:58:42.504017 systemd[1]: Reloading finished in 558 ms. Dec 16 13:58:42.536494 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 13:58:42.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:42.547154 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
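The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-gce extension images onto /usr before the reload. A minimal sketch of enumerating candidate .raw images from common sysext search directories (the directory list follows the systemd-sysext documentation; the helper itself is illustrative and ignores directory-based extensions):

    from pathlib import Path

    # Common systemd-sysext image locations (subset of the documented search path).
    SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def list_sysext_images():
        """Return the *.raw extension images visible in the listed directories."""
        images = []
        for d in SEARCH_DIRS:
            if Path(d).is_dir():
                images.extend(sorted(str(p) for p in Path(d).glob("*.raw")))
        return images

    if __name__ == "__main__":
        # On a host like the one above these would be images such as
        # containerd-flatcar.raw, docker-flatcar.raw, kubernetes.raw, oem-gce.raw.
        print("\n".join(list_sysext_images()))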
Dec 16 13:58:42.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:42.556287 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 13:58:42.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:42.567282 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 13:58:42.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:42.582309 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:58:42.608244 systemd[1]: Starting ensure-sysext.service... Dec 16 13:58:42.620273 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:58:42.629000 audit: BPF prog-id=8 op=UNLOAD Dec 16 13:58:42.629000 audit: BPF prog-id=7 op=UNLOAD Dec 16 13:58:42.630000 audit: BPF prog-id=28 op=LOAD Dec 16 13:58:42.630000 audit: BPF prog-id=29 op=LOAD Dec 16 13:58:42.633074 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:58:42.647000 audit: BPF prog-id=30 op=LOAD Dec 16 13:58:42.647000 audit: BPF prog-id=25 op=UNLOAD Dec 16 13:58:42.647000 audit: BPF prog-id=31 op=LOAD Dec 16 13:58:42.647000 audit: BPF prog-id=32 op=LOAD Dec 16 13:58:42.647000 audit: BPF prog-id=26 op=UNLOAD Dec 16 13:58:42.647000 audit: BPF prog-id=27 op=UNLOAD Dec 16 13:58:42.649000 audit: BPF prog-id=33 op=LOAD Dec 16 13:58:42.649000 audit: BPF prog-id=22 op=UNLOAD Dec 16 13:58:42.649000 audit: BPF prog-id=34 op=LOAD Dec 16 13:58:42.649000 audit: BPF prog-id=35 op=LOAD Dec 16 13:58:42.649000 audit: BPF prog-id=23 op=UNLOAD Dec 16 13:58:42.649000 audit: BPF prog-id=24 op=UNLOAD Dec 16 13:58:42.651000 audit: BPF prog-id=36 op=LOAD Dec 16 13:58:42.651000 audit: BPF prog-id=21 op=UNLOAD Dec 16 13:58:42.653000 audit: BPF prog-id=37 op=LOAD Dec 16 13:58:42.653000 audit: BPF prog-id=15 op=UNLOAD Dec 16 13:58:42.653000 audit: BPF prog-id=38 op=LOAD Dec 16 13:58:42.653000 audit: BPF prog-id=39 op=LOAD Dec 16 13:58:42.653000 audit: BPF prog-id=16 op=UNLOAD Dec 16 13:58:42.653000 audit: BPF prog-id=17 op=UNLOAD Dec 16 13:58:42.655000 audit: BPF prog-id=40 op=LOAD Dec 16 13:58:42.655000 audit: BPF prog-id=18 op=UNLOAD Dec 16 13:58:42.655000 audit: BPF prog-id=41 op=LOAD Dec 16 13:58:42.655000 audit: BPF prog-id=42 op=LOAD Dec 16 13:58:42.655000 audit: BPF prog-id=19 op=UNLOAD Dec 16 13:58:42.655000 audit: BPF prog-id=20 op=UNLOAD Dec 16 13:58:42.658046 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 13:58:42.658096 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 13:58:42.658598 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 13:58:42.660692 systemd-tmpfiles[1347]: ACLs are not supported, ignoring. Dec 16 13:58:42.660857 systemd-tmpfiles[1347]: ACLs are not supported, ignoring. 
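The systemd-tmpfiles warnings above ("Duplicate line for path ... ignoring") mean two tmpfiles.d entries declare the same path; the first one wins and the rest are skipped. A rough, simplified sketch for spotting such duplicates in one fragment directory ahead of time (tmpfiles.d lines are "Type Path Mode User Group Age Argument"; real resolution also spans /etc and /run and honours file overrides, which this ignores):

    from collections import defaultdict
    from pathlib import Path

    def duplicate_tmpfiles_paths(conf_dir="/usr/lib/tmpfiles.d"):
        """Report paths declared more than once across *.conf fragments (simplified)."""
        seen = defaultdict(list)
        for conf in sorted(Path(conf_dir).glob("*.conf")):
            for lineno, raw in enumerate(conf.read_text().splitlines(), start=1):
                line = raw.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    seen[fields[1]].append(f"{conf.name}:{lineno}")
        return {path: locs for path, locs in seen.items() if len(locs) > 1}

    if __name__ == "__main__":
        for path, locs in duplicate_tmpfiles_paths().items():
            print(f"{path}: {', '.join(locs)}")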
Dec 16 13:58:42.674907 systemd[1]: Reload requested from client PID 1346 ('systemctl') (unit ensure-sysext.service)... Dec 16 13:58:42.674954 systemd[1]: Reloading... Dec 16 13:58:42.678520 systemd-tmpfiles[1347]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:58:42.678544 systemd-tmpfiles[1347]: Skipping /boot Dec 16 13:58:42.715308 systemd-tmpfiles[1347]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:58:42.717124 systemd-tmpfiles[1347]: Skipping /boot Dec 16 13:58:42.719544 systemd-udevd[1348]: Using default interface naming scheme 'v257'. Dec 16 13:58:42.827801 zram_generator::config[1377]: No configuration found. Dec 16 13:58:43.043793 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 13:58:43.071848 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 16 13:58:43.079794 kernel: ACPI: button: Power Button [PWRF] Dec 16 13:58:43.096789 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Dec 16 13:58:43.103817 kernel: ACPI: button: Sleep Button [SLPF] Dec 16 13:58:43.246799 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 16 13:58:43.319815 kernel: EDAC MC: Ver: 3.0.0 Dec 16 13:58:43.579118 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 13:58:43.580224 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Dec 16 13:58:43.590969 systemd[1]: Reloading finished in 915 ms. Dec 16 13:58:43.606725 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:58:43.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:58:43.617000 audit: BPF prog-id=43 op=LOAD Dec 16 13:58:43.617000 audit: BPF prog-id=44 op=LOAD Dec 16 13:58:43.617000 audit: BPF prog-id=28 op=UNLOAD Dec 16 13:58:43.617000 audit: BPF prog-id=29 op=UNLOAD Dec 16 13:58:43.619000 audit: BPF prog-id=45 op=LOAD Dec 16 13:58:43.619000 audit: BPF prog-id=36 op=UNLOAD Dec 16 13:58:43.620000 audit: BPF prog-id=46 op=LOAD Dec 16 13:58:43.620000 audit: BPF prog-id=40 op=UNLOAD Dec 16 13:58:43.620000 audit: BPF prog-id=47 op=LOAD Dec 16 13:58:43.620000 audit: BPF prog-id=48 op=LOAD Dec 16 13:58:43.620000 audit: BPF prog-id=41 op=UNLOAD Dec 16 13:58:43.620000 audit: BPF prog-id=42 op=UNLOAD Dec 16 13:58:43.622000 audit: BPF prog-id=49 op=LOAD Dec 16 13:58:43.622000 audit: BPF prog-id=33 op=UNLOAD Dec 16 13:58:43.622000 audit: BPF prog-id=50 op=LOAD Dec 16 13:58:43.622000 audit: BPF prog-id=51 op=LOAD Dec 16 13:58:43.622000 audit: BPF prog-id=34 op=UNLOAD Dec 16 13:58:43.622000 audit: BPF prog-id=35 op=UNLOAD Dec 16 13:58:43.624000 audit: BPF prog-id=52 op=LOAD Dec 16 13:58:43.624000 audit: BPF prog-id=37 op=UNLOAD Dec 16 13:58:43.624000 audit: BPF prog-id=53 op=LOAD Dec 16 13:58:43.624000 audit: BPF prog-id=54 op=LOAD Dec 16 13:58:43.624000 audit: BPF prog-id=38 op=UNLOAD Dec 16 13:58:43.624000 audit: BPF prog-id=39 op=UNLOAD Dec 16 13:58:43.626000 audit: BPF prog-id=55 op=LOAD Dec 16 13:58:43.632000 audit: BPF prog-id=30 op=UNLOAD Dec 16 13:58:43.632000 audit: BPF prog-id=56 op=LOAD Dec 16 13:58:43.632000 audit: BPF prog-id=57 op=LOAD Dec 16 13:58:43.632000 audit: BPF prog-id=31 op=UNLOAD Dec 16 13:58:43.632000 audit: BPF prog-id=32 op=UNLOAD Dec 16 13:58:43.639274 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:58:43.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:43.681244 systemd[1]: Finished ensure-sysext.service. Dec 16 13:58:43.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:58:43.718238 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Dec 16 13:58:43.727050 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:58:43.728809 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:58:43.745022 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 13:58:43.754189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:58:43.757663 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:58:43.771077 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:58:43.782916 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:58:43.796168 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:58:43.808089 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 16 13:58:43.816127 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 16 13:58:43.816381 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 13:58:43.820419 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 13:58:43.832180 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 13:58:43.837000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 13:58:43.837000 audit[1495]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff2045ac90 a2=420 a3=0 items=0 ppid=1467 pid=1495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:58:43.837000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 13:58:43.840480 augenrules[1495]: No rules Dec 16 13:58:43.842518 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:58:43.845656 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 13:58:43.865374 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:58:43.865707 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 13:58:43.869077 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 13:58:43.874318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:58:43.874525 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:58:43.879198 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:58:43.880103 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 13:58:43.909564 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:58:43.911051 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:58:43.920563 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:58:43.921728 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:58:43.931300 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:58:43.932953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:58:43.947714 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:58:43.948100 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:58:43.959902 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 13:58:43.984090 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:58:43.984911 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:58:43.997805 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
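The audit PROCTITLE record above carries the command line hex-encoded, with NUL bytes separating the argv fields. Decoding it shows the auditctl invocation that loaded the rule file, which lines up with the augenrules "No rules" message:

    # Decode the hex proctitle from the audit PROCTITLE record above.
    hexdata = ("2F7362696E2F617564697463746C002D52"
               "002F6574632F61756469742F61756469742E72756C6573")
    argv = bytes.fromhex(hexdata).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> /sbin/auditctl -R /etc/audit/audit.rules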
Dec 16 13:58:44.004395 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 13:58:44.015422 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 16 13:58:44.020948 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Dec 16 13:58:44.089073 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 13:58:44.090073 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 13:58:44.100298 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Dec 16 13:58:44.134812 systemd-networkd[1502]: lo: Link UP Dec 16 13:58:44.134825 systemd-networkd[1502]: lo: Gained carrier Dec 16 13:58:44.138191 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:58:44.138996 systemd-networkd[1502]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:58:44.139016 systemd-networkd[1502]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:58:44.139653 systemd-networkd[1502]: eth0: Link UP Dec 16 13:58:44.140459 systemd[1]: Reached target network.target - Network. Dec 16 13:58:44.141972 systemd-networkd[1502]: eth0: Gained carrier Dec 16 13:58:44.142008 systemd-networkd[1502]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 13:58:44.143268 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 13:58:44.146089 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 13:58:44.151006 systemd-networkd[1502]: eth0: DHCPv4 address 10.128.0.79/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 16 13:58:44.270971 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 13:58:44.353589 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:58:44.756280 ldconfig[1492]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 13:58:44.762262 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 13:58:44.772675 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 13:58:44.800005 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 13:58:44.809129 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:58:44.818018 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 13:58:44.827900 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 13:58:44.837855 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 13:58:44.848036 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 13:58:44.856990 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 13:58:44.867041 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 13:58:44.877089 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. 
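The DHCPv4 lease above is the usual GCE layout: eth0 receives the host address 10.128.0.79 as a /32, so the gateway 10.128.0.1 lies outside the interface prefix and has to be reached through an on-link route rather than ordinary subnet routing. The standard-library ipaddress module makes that concrete (values copied from the networkd message; nothing else is assumed):

    import ipaddress

    # Values from the systemd-networkd DHCPv4 message above.
    iface = ipaddress.ip_interface("10.128.0.79/32")
    gateway = ipaddress.ip_address("10.128.0.1")

    print(iface.network.num_addresses)   # 1: the /32 covers only the host itself
    print(gateway in iface.network)      # False: gateway must be reached on-link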
Dec 16 13:58:44.885922 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 13:58:44.895876 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 13:58:44.895935 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:58:44.904873 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:58:44.915314 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 13:58:44.925418 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 13:58:44.935072 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 13:58:44.945041 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 13:58:44.954873 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 13:58:44.974564 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 13:58:44.983294 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 13:58:44.994742 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 13:58:45.004940 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:58:45.013887 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:58:45.022939 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:58:45.022994 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:58:45.024426 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 13:58:45.045069 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 13:58:45.060152 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 13:58:45.071985 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 13:58:45.088531 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 13:58:45.106986 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 13:58:45.116891 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 13:58:45.119007 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 13:58:45.132154 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 13:58:45.134699 jq[1548]: false Dec 16 13:58:45.143917 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 13:58:45.151393 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Refreshing passwd entry cache Dec 16 13:58:45.154205 oslogin_cache_refresh[1551]: Refreshing passwd entry cache Dec 16 13:58:45.157049 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 13:58:45.165273 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Failure getting users, quitting Dec 16 13:58:45.165273 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Dec 16 13:58:45.165273 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Refreshing group entry cache Dec 16 13:58:45.164676 oslogin_cache_refresh[1551]: Failure getting users, quitting Dec 16 13:58:45.164717 oslogin_cache_refresh[1551]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:58:45.164807 oslogin_cache_refresh[1551]: Refreshing group entry cache Dec 16 13:58:45.169161 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Failure getting groups, quitting Dec 16 13:58:45.169161 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:58:45.166823 oslogin_cache_refresh[1551]: Failure getting groups, quitting Dec 16 13:58:45.166843 oslogin_cache_refresh[1551]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:58:45.169671 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 13:58:45.183130 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 13:58:45.185746 extend-filesystems[1549]: Found /dev/sda6 Dec 16 13:58:45.203946 extend-filesystems[1549]: Found /dev/sda9 Dec 16 13:58:45.208080 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 13:58:45.217577 extend-filesystems[1549]: Checking size of /dev/sda9 Dec 16 13:58:45.218898 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 16 13:58:45.219759 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 13:58:45.221190 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 13:58:45.236758 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:58:45.250839 extend-filesystems[1549]: Resized partition /dev/sda9 Dec 16 13:58:45.255415 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 13:58:45.263878 jq[1576]: true Dec 16 13:58:45.269998 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:58:45.270873 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:58:45.271594 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:58:45.275032 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: ntpd 4.2.8p18@1.4062-o Mon Dec 15 23:46:33 UTC 2025 (1): Starting Dec 16 13:58:45.275032 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:58:45.275032 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: ---------------------------------------------------- Dec 16 13:58:45.275032 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:58:45.275032 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:58:45.275032 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: corporation. 
Support and training for ntp-4 are Dec 16 13:58:45.275032 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: available at https://www.nwtime.org/support Dec 16 13:58:45.275032 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: ---------------------------------------------------- Dec 16 13:58:45.271898 ntpd[1554]: ntpd 4.2.8p18@1.4062-o Mon Dec 15 23:46:33 UTC 2025 (1): Starting Dec 16 13:58:45.272832 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 13:58:45.271974 ntpd[1554]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:58:45.271991 ntpd[1554]: ---------------------------------------------------- Dec 16 13:58:45.272007 ntpd[1554]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:58:45.272022 ntpd[1554]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:58:45.272037 ntpd[1554]: corporation. Support and training for ntp-4 are Dec 16 13:58:45.272052 ntpd[1554]: available at https://www.nwtime.org/support Dec 16 13:58:45.272066 ntpd[1554]: ---------------------------------------------------- Dec 16 13:58:45.282299 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 13:58:45.285543 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: proto: precision = 0.100 usec (-23) Dec 16 13:58:45.284282 ntpd[1554]: proto: precision = 0.100 usec (-23) Dec 16 13:58:45.282629 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 13:58:45.288007 ntpd[1554]: basedate set to 2025-12-03 Dec 16 13:58:45.288910 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: basedate set to 2025-12-03 Dec 16 13:58:45.288910 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: gps base set to 2025-12-07 (week 2396) Dec 16 13:58:45.288910 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:58:45.288910 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:58:45.288035 ntpd[1554]: gps base set to 2025-12-07 (week 2396) Dec 16 13:58:45.288188 ntpd[1554]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:58:45.288226 ntpd[1554]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:58:45.293887 ntpd[1554]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:58:45.300299 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:58:45.300299 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: Listen normally on 3 eth0 10.128.0.79:123 Dec 16 13:58:45.300299 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: Listen normally on 4 lo [::1]:123 Dec 16 13:58:45.300299 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: bind(21) AF_INET6 [fe80::4001:aff:fe80:4f%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 13:58:45.300299 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:4f%2]:123 Dec 16 13:58:45.300299 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: cannot bind address fe80::4001:aff:fe80:4f%2 Dec 16 13:58:45.300299 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: Listening on routing socket on fd #21 for interface updates Dec 16 13:58:45.300592 extend-filesystems[1581]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 13:58:45.298886 ntpd[1554]: Listen normally on 3 eth0 10.128.0.79:123 Dec 16 13:58:45.309027 coreos-metadata[1545]: Dec 16 13:58:45.304 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Dec 16 13:58:45.309027 coreos-metadata[1545]: Dec 16 13:58:45.306 INFO Fetch successful Dec 16 13:58:45.309027 coreos-metadata[1545]: Dec 16 13:58:45.306 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: 
Attempt #1 Dec 16 13:58:45.309027 coreos-metadata[1545]: Dec 16 13:58:45.308 INFO Fetch successful Dec 16 13:58:45.309027 coreos-metadata[1545]: Dec 16 13:58:45.308 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Dec 16 13:58:45.302658 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:58:45.298933 ntpd[1554]: Listen normally on 4 lo [::1]:123 Dec 16 13:58:45.303036 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 13:58:45.319901 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2604027 blocks Dec 16 13:58:45.298984 ntpd[1554]: bind(21) AF_INET6 [fe80::4001:aff:fe80:4f%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 13:58:45.299014 ntpd[1554]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:4f%2]:123 Dec 16 13:58:45.299036 ntpd[1554]: cannot bind address fe80::4001:aff:fe80:4f%2 Dec 16 13:58:45.299076 ntpd[1554]: Listening on routing socket on fd #21 for interface updates Dec 16 13:58:45.329560 coreos-metadata[1545]: Dec 16 13:58:45.321 INFO Fetch successful Dec 16 13:58:45.329560 coreos-metadata[1545]: Dec 16 13:58:45.321 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Dec 16 13:58:45.329560 coreos-metadata[1545]: Dec 16 13:58:45.325 INFO Fetch successful Dec 16 13:58:45.331274 ntpd[1554]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 13:58:45.332361 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 13:58:45.332361 ntpd[1554]: 16 Dec 13:58:45 ntpd[1554]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 13:58:45.331336 ntpd[1554]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 13:58:45.405403 kernel: EXT4-fs (sda9): resized filesystem to 2604027 Dec 16 13:58:45.458653 update_engine[1573]: I20251216 13:58:45.412506 1573 main.cc:92] Flatcar Update Engine starting Dec 16 13:58:45.459116 jq[1585]: true Dec 16 13:58:45.467798 extend-filesystems[1581]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 16 13:58:45.467798 extend-filesystems[1581]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 16 13:58:45.467798 extend-filesystems[1581]: The filesystem on /dev/sda9 is now 2604027 (4k) blocks long. Dec 16 13:58:45.509913 extend-filesystems[1549]: Resized filesystem in /dev/sda9 Dec 16 13:58:45.469365 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:58:45.471861 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 13:58:45.539935 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 13:58:45.551304 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 13:58:45.587958 tar[1582]: linux-amd64/LICENSE Dec 16 13:58:45.587958 tar[1582]: linux-amd64/helm Dec 16 13:58:45.640063 systemd-networkd[1502]: eth0: Gained IPv6LL Dec 16 13:58:45.662969 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 13:58:45.672552 bash[1628]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:58:45.673419 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:58:45.720324 systemd[1]: Reached target network-online.target - Network is Online. 
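The on-line resize logged above takes /dev/sda9 from 1617920 to 2604027 blocks of 4 KiB, i.e. the root filesystem grows from roughly 6.2 GiB to roughly 9.9 GiB. The arithmetic, using only the figures from the EXT4-fs and extend-filesystems messages:

    # Figures from the EXT4-fs / extend-filesystems messages above (4 KiB blocks).
    BLOCK_SIZE = 4096
    old_blocks, new_blocks = 1_617_920, 2_604_027

    to_gib = lambda blocks: blocks * BLOCK_SIZE / 2**30
    print(f"before: {to_gib(old_blocks):.2f} GiB")   # ~6.17 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")   # ~9.93 GiB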
Dec 16 13:58:45.730631 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:58:45.733034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:58:45.751117 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:58:45.764994 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Dec 16 13:58:45.776804 dbus-daemon[1546]: [system] SELinux support is enabled Dec 16 13:58:45.780924 systemd[1]: Starting sshkeys.service... Dec 16 13:58:45.787145 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 13:58:45.800566 systemd-logind[1569]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 13:58:45.800611 systemd-logind[1569]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 16 13:58:45.800645 systemd-logind[1569]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 13:58:45.801088 systemd-logind[1569]: New seat seat0. Dec 16 13:58:45.805616 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:58:45.811410 update_engine[1573]: I20251216 13:58:45.809975 1573 update_check_scheduler.cc:74] Next update check in 5m29s Dec 16 13:58:45.810246 dbus-daemon[1546]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1502 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 13:58:45.821500 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:58:45.875429 init.sh[1639]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 16 13:58:45.875886 init.sh[1639]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 16 13:58:45.878501 init.sh[1639]: + /usr/bin/google_instance_setup Dec 16 13:58:45.896477 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 13:58:45.908365 dbus-daemon[1546]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 13:58:45.909111 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:58:45.923864 systemd[1]: Started sshd@0-10.128.0.79:22-139.178.68.195:34586.service - OpenSSH per-connection server daemon (139.178.68.195:34586). Dec 16 13:58:45.934411 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:58:45.934670 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:58:45.945030 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:58:45.945250 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 13:58:45.968077 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:58:45.995244 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 13:58:46.010116 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 13:58:46.027129 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 13:58:46.041708 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Dec 16 13:58:46.054632 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:58:46.066568 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:58:46.070115 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:58:46.099218 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:58:46.192709 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:58:46.216836 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:58:46.235445 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:58:46.245367 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:58:46.274443 coreos-metadata[1661]: Dec 16 13:58:46.274 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 16 13:58:46.277660 coreos-metadata[1661]: Dec 16 13:58:46.277 INFO Fetch failed with 404: resource not found Dec 16 13:58:46.277660 coreos-metadata[1661]: Dec 16 13:58:46.277 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 16 13:58:46.295954 coreos-metadata[1661]: Dec 16 13:58:46.285 INFO Fetch successful Dec 16 13:58:46.295954 coreos-metadata[1661]: Dec 16 13:58:46.285 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 16 13:58:46.295954 coreos-metadata[1661]: Dec 16 13:58:46.285 INFO Fetch failed with 404: resource not found Dec 16 13:58:46.295954 coreos-metadata[1661]: Dec 16 13:58:46.285 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 16 13:58:46.295954 coreos-metadata[1661]: Dec 16 13:58:46.292 INFO Fetch failed with 404: resource not found Dec 16 13:58:46.295954 coreos-metadata[1661]: Dec 16 13:58:46.293 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 16 13:58:46.295954 coreos-metadata[1661]: Dec 16 13:58:46.294 INFO Fetch successful Dec 16 13:58:46.303760 unknown[1661]: wrote ssh authorized keys file for user: core Dec 16 13:58:46.384581 update-ssh-keys[1675]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:58:46.383550 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 13:58:46.391590 locksmithd[1665]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:58:46.400319 systemd[1]: Finished sshkeys.service. Dec 16 13:58:46.414267 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 16 13:58:46.417357 dbus-daemon[1546]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 13:58:46.422149 dbus-daemon[1546]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1663 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 13:58:46.437164 containerd[1603]: time="2025-12-16T13:58:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:58:46.439792 systemd[1]: Starting polkit.service - Authorization Manager... 
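The coreos-metadata fetches above, including the expected 404s for instance attributes that are simply not set, all go to the GCE metadata server at 169.254.169.254, which answers only requests that carry the Metadata-Flavor: Google header. A minimal standard-library sketch of the same kind of lookup (the paths mirror the ones in the log; the helper name and the reduced error handling are illustrative):

    import urllib.request
    from urllib.error import HTTPError

    METADATA_ROOT = "http://169.254.169.254/computeMetadata/v1/"

    def gce_metadata(path):
        """Fetch one value from the GCE metadata server; None if the key is absent."""
        req = urllib.request.Request(METADATA_ROOT + path,
                                     headers={"Metadata-Flavor": "Google"})
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.read().decode()
        except HTTPError as err:
            if err.code == 404:      # e.g. instance/attributes/sshKeys above
                return None
            raise

    if __name__ == "__main__":
        print(gce_metadata("instance/hostname"))
        print(gce_metadata("instance/attributes/ssh-keys"))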
Dec 16 13:58:46.443261 containerd[1603]: time="2025-12-16T13:58:46.442394217Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 13:58:46.553902 containerd[1603]: time="2025-12-16T13:58:46.552479607Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.684µs" Dec 16 13:58:46.553902 containerd[1603]: time="2025-12-16T13:58:46.552536353Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:58:46.553902 containerd[1603]: time="2025-12-16T13:58:46.552595922Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:58:46.553902 containerd[1603]: time="2025-12-16T13:58:46.552617316Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:58:46.553902 containerd[1603]: time="2025-12-16T13:58:46.553746593Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:58:46.553902 containerd[1603]: time="2025-12-16T13:58:46.553817980Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:58:46.555193 containerd[1603]: time="2025-12-16T13:58:46.553940432Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:58:46.555193 containerd[1603]: time="2025-12-16T13:58:46.553960704Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:58:46.555193 containerd[1603]: time="2025-12-16T13:58:46.554789424Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:58:46.555193 containerd[1603]: time="2025-12-16T13:58:46.554824996Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:58:46.555193 containerd[1603]: time="2025-12-16T13:58:46.554861351Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:58:46.555193 containerd[1603]: time="2025-12-16T13:58:46.554878989Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 13:58:46.555483 containerd[1603]: time="2025-12-16T13:58:46.555198481Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 13:58:46.555483 containerd[1603]: time="2025-12-16T13:58:46.555233713Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:58:46.555483 containerd[1603]: time="2025-12-16T13:58:46.555405313Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:58:46.557379 containerd[1603]: time="2025-12-16T13:58:46.555759310Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:58:46.557379 containerd[1603]: time="2025-12-16T13:58:46.555877004Z" 
level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:58:46.557379 containerd[1603]: time="2025-12-16T13:58:46.555907719Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:58:46.557379 containerd[1603]: time="2025-12-16T13:58:46.555973848Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:58:46.557379 containerd[1603]: time="2025-12-16T13:58:46.556442986Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:58:46.557379 containerd[1603]: time="2025-12-16T13:58:46.556611762Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:58:46.573229 containerd[1603]: time="2025-12-16T13:58:46.572348665Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:58:46.573229 containerd[1603]: time="2025-12-16T13:58:46.572700115Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 13:58:46.575809 containerd[1603]: time="2025-12-16T13:58:46.575116734Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 13:58:46.575809 containerd[1603]: time="2025-12-16T13:58:46.575202208Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:58:46.575809 containerd[1603]: time="2025-12-16T13:58:46.575239482Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:58:46.575809 containerd[1603]: time="2025-12-16T13:58:46.575284615Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:58:46.575809 containerd[1603]: time="2025-12-16T13:58:46.575307823Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:58:46.575809 containerd[1603]: time="2025-12-16T13:58:46.575327621Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:58:46.575809 containerd[1603]: time="2025-12-16T13:58:46.575381158Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:58:46.575809 containerd[1603]: time="2025-12-16T13:58:46.575414244Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:58:46.575809 containerd[1603]: time="2025-12-16T13:58:46.575476938Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:58:46.575809 containerd[1603]: time="2025-12-16T13:58:46.575500383Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:58:46.575809 containerd[1603]: time="2025-12-16T13:58:46.575578957Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:58:46.575809 containerd[1603]: time="2025-12-16T13:58:46.575605172Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:58:46.579068 
containerd[1603]: time="2025-12-16T13:58:46.577086160Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:58:46.579068 containerd[1603]: time="2025-12-16T13:58:46.578274463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:58:46.579068 containerd[1603]: time="2025-12-16T13:58:46.578332258Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:58:46.579068 containerd[1603]: time="2025-12-16T13:58:46.578355309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:58:46.579068 containerd[1603]: time="2025-12-16T13:58:46.578400744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:58:46.579068 containerd[1603]: time="2025-12-16T13:58:46.578440834Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:58:46.579068 containerd[1603]: time="2025-12-16T13:58:46.578507193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 13:58:46.579068 containerd[1603]: time="2025-12-16T13:58:46.578530589Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:58:46.579068 containerd[1603]: time="2025-12-16T13:58:46.578578388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:58:46.579068 containerd[1603]: time="2025-12-16T13:58:46.578599039Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:58:46.579068 containerd[1603]: time="2025-12-16T13:58:46.578639513Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:58:46.579068 containerd[1603]: time="2025-12-16T13:58:46.578724632Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:58:46.579068 containerd[1603]: time="2025-12-16T13:58:46.578877531Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:58:46.579068 containerd[1603]: time="2025-12-16T13:58:46.578904769Z" level=info msg="Start snapshots syncer" Dec 16 13:58:46.579068 containerd[1603]: time="2025-12-16T13:58:46.578984592Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:58:46.582611 containerd[1603]: time="2025-12-16T13:58:46.581824648Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:58:46.585521 containerd[1603]: time="2025-12-16T13:58:46.582854757Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:58:46.585521 containerd[1603]: time="2025-12-16T13:58:46.582975156Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:58:46.586435 containerd[1603]: time="2025-12-16T13:58:46.585814057Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:58:46.586435 containerd[1603]: time="2025-12-16T13:58:46.585896257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:58:46.586435 containerd[1603]: time="2025-12-16T13:58:46.586033775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:58:46.586435 containerd[1603]: time="2025-12-16T13:58:46.586056502Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:58:46.586435 containerd[1603]: time="2025-12-16T13:58:46.586112073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:58:46.586435 containerd[1603]: time="2025-12-16T13:58:46.586150953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:58:46.586435 containerd[1603]: time="2025-12-16T13:58:46.586210979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:58:46.586435 containerd[1603]: time="2025-12-16T13:58:46.586272164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
13:58:46.586435 containerd[1603]: time="2025-12-16T13:58:46.586295523Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:58:46.587439 containerd[1603]: time="2025-12-16T13:58:46.587211388Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:58:46.587812 containerd[1603]: time="2025-12-16T13:58:46.587281186Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:58:46.588246 containerd[1603]: time="2025-12-16T13:58:46.588197652Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:58:46.588598 containerd[1603]: time="2025-12-16T13:58:46.588555751Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:58:46.591615 containerd[1603]: time="2025-12-16T13:58:46.591568533Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:58:46.592045 containerd[1603]: time="2025-12-16T13:58:46.592012530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:58:46.592301 containerd[1603]: time="2025-12-16T13:58:46.592273154Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:58:46.595433 containerd[1603]: time="2025-12-16T13:58:46.594729154Z" level=info msg="runtime interface created" Dec 16 13:58:46.595433 containerd[1603]: time="2025-12-16T13:58:46.594788429Z" level=info msg="created NRI interface" Dec 16 13:58:46.595433 containerd[1603]: time="2025-12-16T13:58:46.594843742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:58:46.595433 containerd[1603]: time="2025-12-16T13:58:46.594877043Z" level=info msg="Connect containerd service" Dec 16 13:58:46.595433 containerd[1603]: time="2025-12-16T13:58:46.594938237Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:58:46.602584 containerd[1603]: time="2025-12-16T13:58:46.601856430Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:58:46.666903 sshd[1656]: Accepted publickey for core from 139.178.68.195 port 34586 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 13:58:46.681985 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:58:46.714065 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:58:46.724726 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:58:46.772232 systemd-logind[1569]: New session 1 of user core. Dec 16 13:58:46.797358 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:58:46.815213 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 13:58:46.868466 (systemd)[1698]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:58:46.878478 systemd-logind[1569]: New session 2 of user core. 
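The cni load failure reported above is the expected first-boot state: the CRI plugin's confDir (/etc/cni/net.d, per the config logged just before) is empty until a network add-on installs a config, so pod networking stays uninitialized for now. A minimal sketch of clearing the error by hand, assuming a plain bridge network is acceptable and that the reference CNI plugins exist under /opt/cni/bin (the binDirs value above); the network name, bridge device and subnet below are illustrative, not taken from this host:

# Illustrative conflist; a real cluster's network add-on would install its own.
sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/10-bridge.conflist >/dev/null <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF

On a Kubernetes node this file normally comes from the cluster's network add-on rather than being written by hand; the "cni network conf syncer" started further down should pick up whatever lands in that directory.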
Dec 16 13:58:46.939419 polkitd[1684]: Started polkitd version 126 Dec 16 13:58:46.962295 polkitd[1684]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 13:58:46.965254 systemd[1]: Started polkit.service - Authorization Manager. Dec 16 13:58:46.963000 polkitd[1684]: Loading rules from directory /run/polkit-1/rules.d Dec 16 13:58:46.963083 polkitd[1684]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 13:58:46.963653 polkitd[1684]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 13:58:46.963697 polkitd[1684]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 13:58:46.963799 polkitd[1684]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 13:58:46.964576 polkitd[1684]: Finished loading, compiling and executing 2 rules Dec 16 13:58:46.966137 dbus-daemon[1546]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 13:58:46.966936 polkitd[1684]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 13:58:47.046418 systemd-hostnamed[1663]: Hostname set to (transient) Dec 16 13:58:47.047329 systemd-resolved[1253]: System hostname changed to 'ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal'. Dec 16 13:58:47.081859 containerd[1603]: time="2025-12-16T13:58:47.076864085Z" level=info msg="Start subscribing containerd event" Dec 16 13:58:47.081859 containerd[1603]: time="2025-12-16T13:58:47.078339287Z" level=info msg="Start recovering state" Dec 16 13:58:47.081859 containerd[1603]: time="2025-12-16T13:58:47.078748268Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:58:47.081859 containerd[1603]: time="2025-12-16T13:58:47.079332828Z" level=info msg="Start event monitor" Dec 16 13:58:47.081859 containerd[1603]: time="2025-12-16T13:58:47.081474031Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:58:47.081859 containerd[1603]: time="2025-12-16T13:58:47.081607669Z" level=info msg="Start streaming server" Dec 16 13:58:47.081859 containerd[1603]: time="2025-12-16T13:58:47.081631248Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:58:47.081859 containerd[1603]: time="2025-12-16T13:58:47.081645906Z" level=info msg="runtime interface starting up..." Dec 16 13:58:47.081859 containerd[1603]: time="2025-12-16T13:58:47.081659875Z" level=info msg="starting plugins..." Dec 16 13:58:47.081859 containerd[1603]: time="2025-12-16T13:58:47.081821788Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:58:47.082338 containerd[1603]: time="2025-12-16T13:58:47.081870572Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 13:58:47.086741 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:58:47.092942 containerd[1603]: time="2025-12-16T13:58:47.086927409Z" level=info msg="containerd successfully booted in 0.659497s" Dec 16 13:58:47.287996 systemd[1698]: Queued start job for default target default.target. Dec 16 13:58:47.296367 systemd[1698]: Created slice app.slice - User Application Slice. Dec 16 13:58:47.296422 systemd[1698]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 13:58:47.296451 systemd[1698]: Reached target paths.target - Paths. Dec 16 13:58:47.296545 systemd[1698]: Reached target timers.target - Timers. 
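With containerd.service started and the daemon serving on both sockets it reported, a quick way to confirm the runtime from a shell (standard ctr client commands; the socket path is the one logged above):

# Unit state plus the most recent journal lines
systemctl status containerd --no-pager | head -n 5
# Talk to the daemon over its reported socket
sudo ctr --address /run/containerd/containerd.sock version
# Plugin inventory; the snapshotters skipped earlier (btrfs, zfs, devmapper,
# erofs, blockfile) show up here with their skip status
sudo ctr --address /run/containerd/containerd.sock plugins ls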
Dec 16 13:58:47.302229 systemd[1698]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 13:58:47.305149 systemd[1698]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 13:58:47.345201 systemd[1698]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 13:58:47.345374 systemd[1698]: Reached target sockets.target - Sockets. Dec 16 13:58:47.351952 systemd[1698]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 13:58:47.352158 systemd[1698]: Reached target basic.target - Basic System. Dec 16 13:58:47.352497 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 13:58:47.352731 systemd[1698]: Reached target default.target - Main User Target. Dec 16 13:58:47.352860 systemd[1698]: Startup finished in 446ms. Dec 16 13:58:47.371180 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 13:58:47.445923 instance-setup[1652]: INFO Running google_set_multiqueue. Dec 16 13:58:47.505035 instance-setup[1652]: INFO Set channels for eth0 to 2. Dec 16 13:58:47.513132 tar[1582]: linux-amd64/README.md Dec 16 13:58:47.527096 instance-setup[1652]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 16 13:58:47.542610 instance-setup[1652]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 16 13:58:47.547589 systemd[1]: Started sshd@1-10.128.0.79:22-139.178.68.195:34596.service - OpenSSH per-connection server daemon (139.178.68.195:34596). Dec 16 13:58:47.550461 instance-setup[1652]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 16 13:58:47.560153 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 13:58:47.560604 instance-setup[1652]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 16 13:58:47.560853 instance-setup[1652]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Dec 16 13:58:47.564850 instance-setup[1652]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 16 13:58:47.566898 instance-setup[1652]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Dec 16 13:58:47.570545 instance-setup[1652]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 16 13:58:47.591247 instance-setup[1652]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 16 13:58:47.601318 instance-setup[1652]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 16 13:58:47.605488 instance-setup[1652]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 16 13:58:47.605554 instance-setup[1652]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 16 13:58:47.636582 init.sh[1639]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 16 13:58:47.812658 startup-script[1760]: INFO Starting startup scripts. Dec 16 13:58:47.818003 startup-script[1760]: INFO No startup scripts found in metadata. Dec 16 13:58:47.818079 startup-script[1760]: INFO Finished running startup scripts. Dec 16 13:58:47.841021 init.sh[1639]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 16 13:58:47.841021 init.sh[1639]: + daemon_pids=() Dec 16 13:58:47.841181 init.sh[1639]: + for d in accounts clock_skew network Dec 16 13:58:47.842028 init.sh[1639]: + daemon_pids+=($!) Dec 16 13:58:47.842028 init.sh[1639]: + for d in accounts clock_skew network Dec 16 13:58:47.842028 init.sh[1639]: + daemon_pids+=($!) 
Dec 16 13:58:47.842028 init.sh[1639]: + for d in accounts clock_skew network Dec 16 13:58:47.842028 init.sh[1639]: + daemon_pids+=($!) Dec 16 13:58:47.842028 init.sh[1639]: + NOTIFY_SOCKET=/run/systemd/notify Dec 16 13:58:47.842028 init.sh[1639]: + /usr/bin/systemd-notify --ready Dec 16 13:58:47.842559 init.sh[1763]: + /usr/bin/google_accounts_daemon Dec 16 13:58:47.844162 init.sh[1764]: + /usr/bin/google_clock_skew_daemon Dec 16 13:58:47.844483 init.sh[1765]: + /usr/bin/google_network_daemon Dec 16 13:58:47.860257 systemd[1]: Started oem-gce.service - GCE Linux Agent. Dec 16 13:58:47.875588 init.sh[1639]: + wait -n 1763 1764 1765 Dec 16 13:58:47.904137 sshd[1744]: Accepted publickey for core from 139.178.68.195 port 34596 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 13:58:47.907070 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:58:47.923095 systemd-logind[1569]: New session 3 of user core. Dec 16 13:58:47.928132 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 13:58:48.056803 sshd[1769]: Connection closed by 139.178.68.195 port 34596 Dec 16 13:58:48.057039 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Dec 16 13:58:48.066373 systemd[1]: sshd@1-10.128.0.79:22-139.178.68.195:34596.service: Deactivated successfully. Dec 16 13:58:48.072371 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 13:58:48.078001 systemd-logind[1569]: Session 3 logged out. Waiting for processes to exit. Dec 16 13:58:48.080743 systemd-logind[1569]: Removed session 3. Dec 16 13:58:48.116247 systemd[1]: Started sshd@2-10.128.0.79:22-139.178.68.195:34612.service - OpenSSH per-connection server daemon (139.178.68.195:34612). Dec 16 13:58:48.274300 ntpd[1554]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:4f%2]:123 Dec 16 13:58:48.275128 ntpd[1554]: 16 Dec 13:58:48 ntpd[1554]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:4f%2]:123 Dec 16 13:58:48.335965 google-networking[1765]: INFO Starting Google Networking daemon. Dec 16 13:58:48.352504 google-clock-skew[1764]: INFO Starting Google Clock Skew daemon. Dec 16 13:58:48.360205 google-clock-skew[1764]: INFO Clock drift token has changed: 0. Dec 16 13:58:48.378423 groupadd[1784]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 16 13:58:48.382485 groupadd[1784]: group added to /etc/gshadow: name=google-sudoers Dec 16 13:58:48.471807 sshd[1777]: Accepted publickey for core from 139.178.68.195 port 34612 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 13:58:48.474094 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:58:48.484635 systemd-logind[1569]: New session 4 of user core. Dec 16 13:58:48.490254 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 13:58:48.570322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:58:48.582000 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 13:58:48.585910 groupadd[1784]: new group: name=google-sudoers, GID=1000 Dec 16 13:58:48.589600 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:58:48.593089 systemd[1]: Startup finished in 2.860s (kernel) + 15.816s (initrd) + 10.333s (userspace) = 29.011s. 
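The init.sh trace above (the SIGTERM trap, the daemon_pids array, systemd-notify --ready, wait -n) is the guest agent's supervision loop for the three Google daemons. Only the bookkeeping lines show up in the xtrace because each daemon then logs under its own PID; a reconstruction of the pattern, with the launch line inside the loop filled in as an assumption based on the google_*_daemon processes that appear right after it:

#!/bin/bash
# Sketch of the supervision pattern visible in the init.sh trace; the launch
# line is assumed, everything else mirrors the logged trace.
trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
daemon_pids=()
for d in accounts clock_skew network; do
  /usr/bin/google_${d}_daemon &    # assumed: matches PIDs 1763-1765 above
  daemon_pids+=($!)
done
# Tell systemd the service is ready (the unit is presumably notify-style),
# then block until any one daemon exits so the unit can react as a whole.
NOTIFY_SOCKET=/run/systemd/notify /usr/bin/systemd-notify --ready
wait -n "${daemon_pids[@]}"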
Dec 16 13:58:48.626907 sshd[1791]: Connection closed by 139.178.68.195 port 34612 Dec 16 13:58:48.628275 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Dec 16 13:58:48.644749 systemd[1]: sshd@2-10.128.0.79:22-139.178.68.195:34612.service: Deactivated successfully. Dec 16 13:58:48.652269 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 13:58:48.656173 systemd-logind[1569]: Session 4 logged out. Waiting for processes to exit. Dec 16 13:58:48.664260 systemd-logind[1569]: Removed session 4. Dec 16 13:58:48.680427 google-accounts[1763]: INFO Starting Google Accounts daemon. Dec 16 13:58:48.701276 google-accounts[1763]: WARNING OS Login not installed. Dec 16 13:58:48.706101 google-accounts[1763]: INFO Creating a new user account for 0. Dec 16 13:58:48.715482 init.sh[1808]: useradd: invalid user name '0': use --badname to ignore Dec 16 13:58:48.716142 google-accounts[1763]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 16 13:58:49.000286 systemd-resolved[1253]: Clock change detected. Flushing caches. Dec 16 13:58:49.001086 google-clock-skew[1764]: INFO Synced system time with hardware clock. Dec 16 13:58:49.473868 kubelet[1798]: E1216 13:58:49.473792 1798 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:58:49.476894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:58:49.477158 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:58:49.477842 systemd[1]: kubelet.service: Consumed 1.311s CPU time, 266.8M memory peak. Dec 16 13:58:58.666415 systemd[1]: Started sshd@3-10.128.0.79:22-139.178.68.195:51374.service - OpenSSH per-connection server daemon (139.178.68.195:51374). Dec 16 13:58:58.945670 sshd[1819]: Accepted publickey for core from 139.178.68.195 port 51374 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 13:58:58.947483 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:58:58.954819 systemd-logind[1569]: New session 5 of user core. Dec 16 13:58:58.964031 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 13:58:59.079701 sshd[1823]: Connection closed by 139.178.68.195 port 51374 Dec 16 13:58:59.080708 sshd-session[1819]: pam_unix(sshd:session): session closed for user core Dec 16 13:58:59.086919 systemd[1]: sshd@3-10.128.0.79:22-139.178.68.195:51374.service: Deactivated successfully. Dec 16 13:58:59.089318 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 13:58:59.090682 systemd-logind[1569]: Session 5 logged out. Waiting for processes to exit. Dec 16 13:58:59.092569 systemd-logind[1569]: Removed session 5. Dec 16 13:58:59.136045 systemd[1]: Started sshd@4-10.128.0.79:22-139.178.68.195:51378.service - OpenSSH per-connection server daemon (139.178.68.195:51378). Dec 16 13:58:59.432531 sshd[1829]: Accepted publickey for core from 139.178.68.195 port 51378 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 13:58:59.434293 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:58:59.441834 systemd-logind[1569]: New session 6 of user core. 
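The kubelet exit above is the normal state of a node that has not been joined to a cluster yet: /var/lib/kubelet/config.yaml is typically written by kubeadm init/join (or an equivalent provisioning step), and until it exists the unit fails and, as the next lines show, systemd keeps scheduling restarts. A quick check from a shell:

# Is the kubelet config present yet?
ls -l /var/lib/kubelet/config.yaml
# Current unit state and restart counter
systemctl status kubelet --no-pager --lines=5
# The same "failed to load kubelet config file" errors, straight from the journal
journalctl -u kubelet -n 20 --no-pager

Once provisioning writes the file, the next scheduled restart should pick it up without further intervention.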
Dec 16 13:58:59.449064 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 13:58:59.512273 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 13:58:59.514727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:58:59.563911 sshd[1833]: Connection closed by 139.178.68.195 port 51378 Dec 16 13:58:59.564702 sshd-session[1829]: pam_unix(sshd:session): session closed for user core Dec 16 13:58:59.571344 systemd[1]: sshd@4-10.128.0.79:22-139.178.68.195:51378.service: Deactivated successfully. Dec 16 13:58:59.573727 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 13:58:59.574939 systemd-logind[1569]: Session 6 logged out. Waiting for processes to exit. Dec 16 13:58:59.578479 systemd-logind[1569]: Removed session 6. Dec 16 13:58:59.618224 systemd[1]: Started sshd@5-10.128.0.79:22-139.178.68.195:51384.service - OpenSSH per-connection server daemon (139.178.68.195:51384). Dec 16 13:58:59.887486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:58:59.897811 sshd[1842]: Accepted publickey for core from 139.178.68.195 port 51384 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 13:58:59.899509 sshd-session[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:58:59.902207 (kubelet)[1850]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:58:59.909187 systemd-logind[1569]: New session 7 of user core. Dec 16 13:58:59.916012 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 13:58:59.965078 kubelet[1850]: E1216 13:58:59.965032 1850 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:58:59.969805 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:58:59.969996 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 13:58:59.970531 systemd[1]: kubelet.service: Consumed 214ms CPU time, 109.7M memory peak. Dec 16 13:59:00.029411 sshd[1856]: Connection closed by 139.178.68.195 port 51384 Dec 16 13:59:00.030264 sshd-session[1842]: pam_unix(sshd:session): session closed for user core Dec 16 13:59:00.036601 systemd[1]: sshd@5-10.128.0.79:22-139.178.68.195:51384.service: Deactivated successfully. Dec 16 13:59:00.038934 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 13:59:00.040315 systemd-logind[1569]: Session 7 logged out. Waiting for processes to exit. Dec 16 13:59:00.042329 systemd-logind[1569]: Removed session 7. Dec 16 13:59:00.086418 systemd[1]: Started sshd@6-10.128.0.79:22-139.178.68.195:51390.service - OpenSSH per-connection server daemon (139.178.68.195:51390). Dec 16 13:59:00.360784 sshd[1864]: Accepted publickey for core from 139.178.68.195 port 51390 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 13:59:00.362530 sshd-session[1864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:59:00.370211 systemd-logind[1569]: New session 8 of user core. Dec 16 13:59:00.377013 systemd[1]: Started session-8.scope - Session 8 of User core. 
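The unit's "Referenced but unset environment variable" note for KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS is benign: the env files or drop-ins that would define them simply are not populated. If extra kubelet flags were wanted, the usual route is a systemd drop-in; a sketch under the assumption that the stock unit keeps reading $KUBELET_EXTRA_ARGS, with a purely illustrative flag value:

sudo mkdir -p /etc/systemd/system/kubelet.service.d
# Illustrative only; any kubelet flags can be passed this way.
printf '[Service]\nEnvironment="KUBELET_EXTRA_ARGS=--node-ip=10.128.0.79"\n' \
  | sudo tee /etc/systemd/system/kubelet.service.d/20-extra-args.conf >/dev/null
sudo systemctl daemon-reload
sudo systemctl restart kubelet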
Dec 16 13:59:00.477992 sudo[1869]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 13:59:00.478586 sudo[1869]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:59:00.492099 sudo[1869]: pam_unix(sudo:session): session closed for user root Dec 16 13:59:00.534775 sshd[1868]: Connection closed by 139.178.68.195 port 51390 Dec 16 13:59:00.535972 sshd-session[1864]: pam_unix(sshd:session): session closed for user core Dec 16 13:59:00.543196 systemd[1]: sshd@6-10.128.0.79:22-139.178.68.195:51390.service: Deactivated successfully. Dec 16 13:59:00.545940 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:59:00.547380 systemd-logind[1569]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:59:00.549620 systemd-logind[1569]: Removed session 8. Dec 16 13:59:00.600149 systemd[1]: Started sshd@7-10.128.0.79:22-139.178.68.195:52090.service - OpenSSH per-connection server daemon (139.178.68.195:52090). Dec 16 13:59:00.883598 sshd[1876]: Accepted publickey for core from 139.178.68.195 port 52090 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 13:59:00.885345 sshd-session[1876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:59:00.892819 systemd-logind[1569]: New session 9 of user core. Dec 16 13:59:00.902977 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 13:59:00.983580 sudo[1882]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 13:59:00.984204 sudo[1882]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:59:00.987394 sudo[1882]: pam_unix(sudo:session): session closed for user root Dec 16 13:59:01.001941 sudo[1881]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 13:59:01.002445 sudo[1881]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:59:01.012328 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:59:01.075794 kernel: kauditd_printk_skb: 106 callbacks suppressed Dec 16 13:59:01.075939 kernel: audit: type=1305 audit(1765893541.068:213): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 13:59:01.068000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 13:59:01.076128 augenrules[1906]: No rules Dec 16 13:59:01.078794 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:59:01.079180 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
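The sequence above (remove the SELinux/default rule fragments, restart audit-rules.service, augenrules reporting "No rules") leaves the kernel with an empty audit ruleset, which is what the op=remove_rule CONFIG_CHANGE record and the /sbin/auditctl -R /etc/audit/audit.rules proctitle decoded from the records below correspond to. Standard commands to inspect or rebuild the ruleset:

# What is currently loaded in the kernel
sudo auditctl -l
# Rule fragments live here; augenrules merges them into
# /etc/audit/audit.rules and loads the result via auditctl -R
ls /etc/audit/rules.d/
sudo augenrules --load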
Dec 16 13:59:01.085152 sudo[1881]: pam_unix(sudo:session): session closed for user root Dec 16 13:59:01.068000 audit[1906]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc2e4732d0 a2=420 a3=0 items=0 ppid=1887 pid=1906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:01.120075 kernel: audit: type=1300 audit(1765893541.068:213): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc2e4732d0 a2=420 a3=0 items=0 ppid=1887 pid=1906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:01.120183 kernel: audit: type=1327 audit(1765893541.068:213): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 13:59:01.068000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 13:59:01.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:01.143723 systemd-logind[1569]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:59:01.135030 sshd-session[1876]: pam_unix(sshd:session): session closed for user core Dec 16 13:59:01.155653 kernel: audit: type=1130 audit(1765893541.079:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:01.155726 kernel: audit: type=1131 audit(1765893541.079:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:01.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:01.156407 sshd[1880]: Connection closed by 139.178.68.195 port 52090 Dec 16 13:59:01.145168 systemd[1]: sshd@7-10.128.0.79:22-139.178.68.195:52090.service: Deactivated successfully. Dec 16 13:59:01.149071 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:59:01.153712 systemd-logind[1569]: Removed session 9. Dec 16 13:59:01.084000 audit[1881]: USER_END pid=1881 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:59:01.177776 kernel: audit: type=1106 audit(1765893541.084:216): pid=1881 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:59:01.084000 audit[1881]: CRED_DISP pid=1881 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 13:59:01.201776 kernel: audit: type=1104 audit(1765893541.084:217): pid=1881 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:59:01.139000 audit[1876]: USER_END pid=1876 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:59:01.258637 kernel: audit: type=1106 audit(1765893541.139:218): pid=1876 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:59:01.139000 audit[1876]: CRED_DISP pid=1876 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:59:01.269201 systemd[1]: Started sshd@8-10.128.0.79:22-139.178.68.195:52094.service - OpenSSH per-connection server daemon (139.178.68.195:52094). Dec 16 13:59:01.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.79:22-139.178.68.195:52090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:01.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.79:22-139.178.68.195:52094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:01.284780 kernel: audit: type=1104 audit(1765893541.139:219): pid=1876 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:59:01.284835 kernel: audit: type=1131 audit(1765893541.144:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.79:22-139.178.68.195:52090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:59:01.568000 audit[1916]: USER_ACCT pid=1916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:59:01.569706 sshd[1916]: Accepted publickey for core from 139.178.68.195 port 52094 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 13:59:01.570000 audit[1916]: CRED_ACQ pid=1916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:59:01.570000 audit[1916]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe95b66570 a2=3 a3=0 items=0 ppid=1 pid=1916 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:01.570000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 13:59:01.571621 sshd-session[1916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:59:01.579524 systemd-logind[1569]: New session 10 of user core. Dec 16 13:59:01.585063 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 13:59:01.588000 audit[1916]: USER_START pid=1916 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:59:01.591000 audit[1920]: CRED_ACQ pid=1920 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:59:01.667000 audit[1921]: USER_ACCT pid=1921 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:59:01.668207 sudo[1921]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 13:59:01.667000 audit[1921]: CRED_REFR pid=1921 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:59:01.668791 sudo[1921]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 13:59:01.668000 audit[1921]: USER_START pid=1921 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:59:02.189821 systemd[1]: Starting docker.service - Docker Application Container Engine... 
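The NETFILTER_CFG/SYSCALL/PROCTITLE triplets that follow are dockerd programming its standard chains as it starts: the hex proctitles decode to iptables and ip6tables --wait calls that create DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER, then add the 172.17.0.0/16 MASQUERADE rule and the docker0 accept/drop/conntrack rules (family=2 records are IPv4, family=10 the matching IPv6 set). To view the result once the daemon is up, plain iptables listings are enough; nothing beyond the chain names logged here is assumed:

# NAT side: the DOCKER chain and the MASQUERADE rule in POSTROUTING
sudo iptables --wait -t nat -nvL DOCKER
sudo iptables --wait -t nat -nvL POSTROUTING
# Filter side: the forwarding/isolation chains created above
sudo iptables --wait -nvL DOCKER-FORWARD
sudo iptables --wait -nvL DOCKER-USER
# Same chains exist for IPv6
sudo ip6tables --wait -nvL DOCKER-FORWARD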
Dec 16 13:59:02.219270 (dockerd)[1939]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 13:59:02.589075 dockerd[1939]: time="2025-12-16T13:59:02.588760833Z" level=info msg="Starting up" Dec 16 13:59:02.590218 dockerd[1939]: time="2025-12-16T13:59:02.590179332Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 13:59:02.606123 dockerd[1939]: time="2025-12-16T13:59:02.606075028Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 13:59:02.628352 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4079522332-merged.mount: Deactivated successfully. Dec 16 13:59:02.668436 dockerd[1939]: time="2025-12-16T13:59:02.668192169Z" level=info msg="Loading containers: start." Dec 16 13:59:02.686769 kernel: Initializing XFRM netlink socket Dec 16 13:59:02.770000 audit[1987]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.770000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffec03ffc40 a2=0 a3=0 items=0 ppid=1939 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.770000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 13:59:02.774000 audit[1989]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.774000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdbcf29c20 a2=0 a3=0 items=0 ppid=1939 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.774000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 13:59:02.777000 audit[1991]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.777000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc299fcf40 a2=0 a3=0 items=0 ppid=1939 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.777000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 13:59:02.780000 audit[1993]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.780000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb416d4c0 a2=0 a3=0 items=0 ppid=1939 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.780000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 13:59:02.783000 audit[1995]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.783000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdb32e2700 a2=0 a3=0 items=0 ppid=1939 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.783000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 13:59:02.786000 audit[1997]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.786000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdfe28a500 a2=0 a3=0 items=0 ppid=1939 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.786000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:59:02.789000 audit[1999]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.789000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc725d5170 a2=0 a3=0 items=0 ppid=1939 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.789000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:59:02.793000 audit[2001]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.793000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff7ec8e000 a2=0 a3=0 items=0 ppid=1939 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.793000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 13:59:02.832000 audit[2004]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.832000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fffa8fe4240 a2=0 a3=0 items=0 ppid=1939 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.832000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 13:59:02.836000 audit[2006]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.836000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff683fbea0 a2=0 a3=0 items=0 ppid=1939 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.836000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 13:59:02.839000 audit[2008]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.839000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffca92af020 a2=0 a3=0 items=0 ppid=1939 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.839000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 13:59:02.843000 audit[2010]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2010 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.843000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc42b7f3f0 a2=0 a3=0 items=0 ppid=1939 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.843000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:59:02.847000 audit[2012]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.847000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffb5cc7b40 a2=0 a3=0 items=0 ppid=1939 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.847000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 13:59:02.908000 audit[2042]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:02.908000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc5c2e7cf0 a2=0 a3=0 items=0 ppid=1939 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.908000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 13:59:02.911000 audit[2044]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:02.911000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff0b1b4590 a2=0 a3=0 items=0 ppid=1939 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.911000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 13:59:02.914000 audit[2046]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2046 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:02.914000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7bb9f820 a2=0 a3=0 items=0 ppid=1939 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.914000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 13:59:02.917000 audit[2048]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2048 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:02.917000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde5a43de0 a2=0 a3=0 items=0 ppid=1939 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.917000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 13:59:02.920000 audit[2050]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:02.920000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd4ab34c10 a2=0 a3=0 items=0 ppid=1939 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.920000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 13:59:02.923000 audit[2052]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:02.923000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff98105000 a2=0 a3=0 items=0 ppid=1939 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.923000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:59:02.926000 audit[2054]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2054 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:02.926000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd58542ee0 a2=0 a3=0 items=0 ppid=1939 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.926000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:59:02.929000 audit[2056]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:02.929000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc2d21e500 a2=0 a3=0 items=0 ppid=1939 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.929000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 13:59:02.933000 audit[2058]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:02.933000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fffeb5a9c90 a2=0 a3=0 items=0 ppid=1939 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.933000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 13:59:02.936000 audit[2060]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:02.936000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd9e9b31e0 a2=0 a3=0 items=0 ppid=1939 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.936000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 13:59:02.940000 audit[2062]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:02.940000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffdfa03ba70 a2=0 a3=0 items=0 ppid=1939 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.940000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 13:59:02.943000 audit[2064]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2064 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 16 13:59:02.943000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe54f304d0 a2=0 a3=0 items=0 ppid=1939 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.943000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 13:59:02.947000 audit[2066]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2066 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:02.947000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd35dc4090 a2=0 a3=0 items=0 ppid=1939 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.947000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 13:59:02.954000 audit[2071]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.954000 audit[2071]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe19ccfe10 a2=0 a3=0 items=0 ppid=1939 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.954000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 13:59:02.957000 audit[2073]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.957000 audit[2073]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcf0a7df20 a2=0 a3=0 items=0 ppid=1939 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.957000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 13:59:02.960000 audit[2075]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.960000 audit[2075]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc8459c420 a2=0 a3=0 items=0 ppid=1939 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.960000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 13:59:02.963000 audit[2077]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:02.963000 audit[2077]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffb5524320 a2=0 a3=0 items=0 ppid=1939 pid=2077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.963000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 13:59:02.966000 audit[2079]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:02.966000 audit[2079]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffed10b8b20 a2=0 a3=0 items=0 ppid=1939 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.966000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 13:59:02.970000 audit[2081]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:02.970000 audit[2081]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd7f88ff40 a2=0 a3=0 items=0 ppid=1939 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.970000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 13:59:02.992000 audit[2086]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.992000 audit[2086]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffffc9b4210 a2=0 a3=0 items=0 ppid=1939 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.992000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 13:59:02.996000 audit[2088]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2088 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:02.996000 audit[2088]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd2004c200 a2=0 a3=0 items=0 ppid=1939 pid=2088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:02.996000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 13:59:03.009000 audit[2096]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:03.009000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc1fb5ccd0 a2=0 a3=0 items=0 ppid=1939 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
16 13:59:03.009000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 13:59:03.023000 audit[2102]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2102 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:03.023000 audit[2102]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffebebf0350 a2=0 a3=0 items=0 ppid=1939 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:03.023000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 13:59:03.027000 audit[2104]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:03.027000 audit[2104]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fffcbad9b80 a2=0 a3=0 items=0 ppid=1939 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:03.027000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 13:59:03.031000 audit[2106]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:03.031000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffb6936ff0 a2=0 a3=0 items=0 ppid=1939 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:03.031000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 13:59:03.034000 audit[2108]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:03.034000 audit[2108]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc52606720 a2=0 a3=0 items=0 ppid=1939 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:03.034000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 13:59:03.037000 audit[2110]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:03.037000 audit[2110]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc4e7e6be0 a2=0 a3=0 items=0 ppid=1939 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:03.037000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 13:59:03.039194 systemd-networkd[1502]: docker0: Link UP Dec 16 13:59:03.045509 dockerd[1939]: time="2025-12-16T13:59:03.045430823Z" level=info msg="Loading containers: done." Dec 16 13:59:03.069700 dockerd[1939]: time="2025-12-16T13:59:03.069631336Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 13:59:03.069926 dockerd[1939]: time="2025-12-16T13:59:03.069770394Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 13:59:03.069926 dockerd[1939]: time="2025-12-16T13:59:03.069907323Z" level=info msg="Initializing buildkit" Dec 16 13:59:03.099074 dockerd[1939]: time="2025-12-16T13:59:03.098937731Z" level=info msg="Completed buildkit initialization" Dec 16 13:59:03.109056 dockerd[1939]: time="2025-12-16T13:59:03.108997126Z" level=info msg="Daemon has completed initialization" Dec 16 13:59:03.109189 dockerd[1939]: time="2025-12-16T13:59:03.109075890Z" level=info msg="API listen on /run/docker.sock" Dec 16 13:59:03.109638 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 13:59:03.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:04.021435 containerd[1603]: time="2025-12-16T13:59:04.021358587Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 13:59:04.489013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1872342993.mount: Deactivated successfully. 
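Editor's note: the PROCTITLE fields in the audit records above carry the iptables/ip6tables command lines as hex-encoded, NUL-separated argv strings. A minimal Python sketch (not part of the captured system; the function name is illustrative) that decodes one value copied from the first ip6tables record above:

    def decode_proctitle(hex_value: str) -> str:
        """Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes."""
        argv = bytes.fromhex(hex_value).split(b"\x00")
        return " ".join(arg.decode("utf-8", errors="replace") for arg in argv)

    # Value copied from the record that registers DOCKER-ISOLATION-STAGE-2 above.
    sample = ("2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572"
              "002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32")
    print(decode_proctitle(sample))
    # -> /usr/bin/ip6tables --wait -t filter -N DOCKER-ISOLATION-STAGE-2

Decoded this way, the run of records above reads as dockerd creating its DOCKER-USER, DOCKER-FORWARD, DOCKER-ISOLATION and MASQUERADE chains and rules for both IPv4 and IPv6 before bringing docker0 up.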
Dec 16 13:59:05.921167 containerd[1603]: time="2025-12-16T13:59:05.921098695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:05.922761 containerd[1603]: time="2025-12-16T13:59:05.922686233Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=27403437" Dec 16 13:59:05.924778 containerd[1603]: time="2025-12-16T13:59:05.923726677Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:05.926802 containerd[1603]: time="2025-12-16T13:59:05.926721248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:05.928366 containerd[1603]: time="2025-12-16T13:59:05.928091534Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 1.90667908s" Dec 16 13:59:05.928366 containerd[1603]: time="2025-12-16T13:59:05.928155328Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 16 13:59:05.929075 containerd[1603]: time="2025-12-16T13:59:05.929045808Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 13:59:08.301143 containerd[1603]: time="2025-12-16T13:59:08.301056227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:08.302647 containerd[1603]: time="2025-12-16T13:59:08.302414053Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=0" Dec 16 13:59:08.303707 containerd[1603]: time="2025-12-16T13:59:08.303662660Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:08.306808 containerd[1603]: time="2025-12-16T13:59:08.306771188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:08.308183 containerd[1603]: time="2025-12-16T13:59:08.308015930Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 2.378819555s" Dec 16 13:59:08.308183 containerd[1603]: time="2025-12-16T13:59:08.308064460Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 16 13:59:08.309313 
containerd[1603]: time="2025-12-16T13:59:08.309266680Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 13:59:09.581601 containerd[1603]: time="2025-12-16T13:59:09.581532007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:09.583094 containerd[1603]: time="2025-12-16T13:59:09.582858222Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=0" Dec 16 13:59:09.584227 containerd[1603]: time="2025-12-16T13:59:09.584177854Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:09.587407 containerd[1603]: time="2025-12-16T13:59:09.587366882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:09.588697 containerd[1603]: time="2025-12-16T13:59:09.588658939Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.279244682s" Dec 16 13:59:09.588861 containerd[1603]: time="2025-12-16T13:59:09.588835347Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 16 13:59:09.589883 containerd[1603]: time="2025-12-16T13:59:09.589420611Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 13:59:10.012611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 13:59:10.018463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:59:10.501714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:59:10.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:10.508765 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 13:59:10.508883 kernel: audit: type=1130 audit(1765893550.501:271): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:10.538323 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 13:59:10.622219 kubelet[2226]: E1216 13:59:10.622154 2226 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 13:59:10.626435 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 13:59:10.626681 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
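Editor's note: the kubelet exit above is the usual pre-bootstrap failure: the unit starts before anything has written /var/lib/kubelet/config.yaml, so the run ends with "no such file or directory" and systemd records the exit-code failure. A minimal sketch of that pre-flight check (illustrative only, not part of the host's tooling; the path is taken from the error message above):

    from pathlib import Path

    # Normally written by a bootstrapper such as kubeadm before the kubelet can start.
    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    if KUBELET_CONFIG.is_file():
        print(f"kubelet config present ({KUBELET_CONFIG.stat().st_size} bytes)")
    else:
        print(f"kubelet would fail: open {KUBELET_CONFIG}: no such file or directory")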
Dec 16 13:59:10.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:59:10.627631 systemd[1]: kubelet.service: Consumed 254ms CPU time, 110.6M memory peak. Dec 16 13:59:10.649790 kernel: audit: type=1131 audit(1765893550.626:272): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:59:10.837947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4001285234.mount: Deactivated successfully. Dec 16 13:59:11.553248 containerd[1603]: time="2025-12-16T13:59:11.553176195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:11.555214 containerd[1603]: time="2025-12-16T13:59:11.554577155Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31157702" Dec 16 13:59:11.558109 containerd[1603]: time="2025-12-16T13:59:11.558051968Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:11.562765 containerd[1603]: time="2025-12-16T13:59:11.562709898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:11.563539 containerd[1603]: time="2025-12-16T13:59:11.563492037Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.97403025s" Dec 16 13:59:11.563639 containerd[1603]: time="2025-12-16T13:59:11.563544976Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 16 13:59:11.564204 containerd[1603]: time="2025-12-16T13:59:11.564170974Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 13:59:11.991313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075048750.mount: Deactivated successfully. 
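Editor's note: the transient mount unit named above (var-lib-containerd-tmpmounts-containerd\x2dmount4001285234.mount) uses systemd's unit-name escaping: "/" separators become "-" and literal bytes inside a component are written as \xNN, so \x2d is a literal dash. A minimal Python sketch (covering only the escapes seen here) that recovers the mount path:

    import re

    def systemd_unescape_mount(unit: str) -> str:
        """Turn a systemd mount unit name back into a path ("-" = "/", \\xNN = literal byte)."""
        parts = unit.removesuffix(".mount").split("-")
        decode = lambda p: re.sub(r"\\x([0-9a-fA-F]{2})",
                                  lambda m: chr(int(m.group(1), 16)), p)
        return "/" + "/".join(decode(p) for p in parts)

    unit = r"var-lib-containerd-tmpmounts-containerd\x2dmount4001285234.mount"
    print(systemd_unescape_mount(unit))
    # -> /var/lib/containerd/tmpmounts/containerd-mount4001285234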
Dec 16 13:59:13.116005 containerd[1603]: time="2025-12-16T13:59:13.115943612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:13.117502 containerd[1603]: time="2025-12-16T13:59:13.117458757Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Dec 16 13:59:13.118791 containerd[1603]: time="2025-12-16T13:59:13.118630564Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:13.122765 containerd[1603]: time="2025-12-16T13:59:13.121914609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:13.123387 containerd[1603]: time="2025-12-16T13:59:13.123348708Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.559133934s" Dec 16 13:59:13.123526 containerd[1603]: time="2025-12-16T13:59:13.123502301Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 16 13:59:13.124246 containerd[1603]: time="2025-12-16T13:59:13.124212700Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 13:59:13.497036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2667889492.mount: Deactivated successfully. 
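Editor's note: containerd reports each completed pull in a single "Pulled image ... size ... in ..." message like the ones above. A small Python sketch (the regex is illustrative) that extracts the image reference, reported size and duration from such a line, using an abridged copy of the coredns record above:

    import re

    # Abridged from the coredns "Pulled image" record above.
    line = ('Pulled image "registry.k8s.io/coredns/coredns:v1.11.3" ... '
            'size "18562039" in 1.559133934s')

    m = re.search(r'Pulled image "([^"]+)".*size "(\d+)" in ([\d.]+)(m?s)', line)
    if m:
        image, size, value, unit = m.groups()
        seconds = float(value) / 1000 if unit == "ms" else float(value)
        print(f"{image}: {int(size):,} bytes in {seconds:.3f} s")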
Dec 16 13:59:13.504574 containerd[1603]: time="2025-12-16T13:59:13.504512467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:59:13.505719 containerd[1603]: time="2025-12-16T13:59:13.505470039Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=316581" Dec 16 13:59:13.506817 containerd[1603]: time="2025-12-16T13:59:13.506782423Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:59:13.509519 containerd[1603]: time="2025-12-16T13:59:13.509457691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:59:13.510787 containerd[1603]: time="2025-12-16T13:59:13.510378701Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 386.121208ms" Dec 16 13:59:13.510787 containerd[1603]: time="2025-12-16T13:59:13.510419947Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 13:59:13.511665 containerd[1603]: time="2025-12-16T13:59:13.511621370Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 13:59:13.855470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1310625760.mount: Deactivated successfully. 
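Editor's note: taken together, the pull records above give a rough feel for image-pull throughput on this node. A short sketch dividing each reported size by its pull duration (the figures are the values containerd reports, so the rates are only approximate):

    # (image, reported size in bytes, pull duration in seconds) copied from the records above.
    pulls = [
        ("kube-apiserver:v1.32.10",          29_068_782, 1.90667908),
        ("kube-controller-manager:v1.32.10", 26_649_046, 2.378819555),
        ("kube-scheduler:v1.32.10",          21_061_302, 1.279244682),
        ("kube-proxy:v1.32.10",              31_160_442, 1.97403025),
        ("coredns:v1.11.3",                  18_562_039, 1.559133934),
        ("pause:3.10",                           320_368, 0.386121208),
    ]
    for image, size, seconds in pulls:
        print(f"{image:<35} {size / seconds / 1e6:6.1f} MB/s")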
Dec 16 13:59:16.256197 containerd[1603]: time="2025-12-16T13:59:16.256133761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:16.257781 containerd[1603]: time="2025-12-16T13:59:16.257711306Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55728979" Dec 16 13:59:16.259767 containerd[1603]: time="2025-12-16T13:59:16.259009456Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:16.262321 containerd[1603]: time="2025-12-16T13:59:16.262279501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:16.263887 containerd[1603]: time="2025-12-16T13:59:16.263842093Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.752184911s" Dec 16 13:59:16.263967 containerd[1603]: time="2025-12-16T13:59:16.263892306Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 16 13:59:17.044435 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 16 13:59:17.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:17.068784 kernel: audit: type=1131 audit(1765893557.044:273): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:17.081000 audit: BPF prog-id=62 op=UNLOAD Dec 16 13:59:17.089779 kernel: audit: type=1334 audit(1765893557.081:274): prog-id=62 op=UNLOAD Dec 16 13:59:19.963293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:59:19.963832 systemd[1]: kubelet.service: Consumed 254ms CPU time, 110.6M memory peak. Dec 16 13:59:19.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:19.970148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:59:19.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:20.007405 kernel: audit: type=1130 audit(1765893559.963:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:59:20.007511 kernel: audit: type=1131 audit(1765893559.963:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:20.024462 systemd[1]: Reload requested from client PID 2376 ('systemctl') (unit session-10.scope)... Dec 16 13:59:20.024487 systemd[1]: Reloading... Dec 16 13:59:20.207312 zram_generator::config[2423]: No configuration found. Dec 16 13:59:20.573226 systemd[1]: Reloading finished in 548 ms. Dec 16 13:59:20.610795 kernel: audit: type=1334 audit(1765893560.600:277): prog-id=66 op=LOAD Dec 16 13:59:20.600000 audit: BPF prog-id=66 op=LOAD Dec 16 13:59:20.608000 audit: BPF prog-id=55 op=UNLOAD Dec 16 13:59:20.623797 kernel: audit: type=1334 audit(1765893560.608:278): prog-id=55 op=UNLOAD Dec 16 13:59:20.623891 kernel: audit: type=1334 audit(1765893560.608:279): prog-id=67 op=LOAD Dec 16 13:59:20.623931 kernel: audit: type=1334 audit(1765893560.608:280): prog-id=68 op=LOAD Dec 16 13:59:20.623976 kernel: audit: type=1334 audit(1765893560.608:281): prog-id=56 op=UNLOAD Dec 16 13:59:20.624016 kernel: audit: type=1334 audit(1765893560.608:282): prog-id=57 op=UNLOAD Dec 16 13:59:20.608000 audit: BPF prog-id=67 op=LOAD Dec 16 13:59:20.608000 audit: BPF prog-id=68 op=LOAD Dec 16 13:59:20.608000 audit: BPF prog-id=56 op=UNLOAD Dec 16 13:59:20.608000 audit: BPF prog-id=57 op=UNLOAD Dec 16 13:59:20.613000 audit: BPF prog-id=69 op=LOAD Dec 16 13:59:20.613000 audit: BPF prog-id=46 op=UNLOAD Dec 16 13:59:20.613000 audit: BPF prog-id=70 op=LOAD Dec 16 13:59:20.613000 audit: BPF prog-id=71 op=LOAD Dec 16 13:59:20.613000 audit: BPF prog-id=47 op=UNLOAD Dec 16 13:59:20.613000 audit: BPF prog-id=48 op=UNLOAD Dec 16 13:59:20.618000 audit: BPF prog-id=72 op=LOAD Dec 16 13:59:20.619000 audit: BPF prog-id=59 op=UNLOAD Dec 16 13:59:20.619000 audit: BPF prog-id=73 op=LOAD Dec 16 13:59:20.619000 audit: BPF prog-id=74 op=LOAD Dec 16 13:59:20.619000 audit: BPF prog-id=60 op=UNLOAD Dec 16 13:59:20.619000 audit: BPF prog-id=61 op=UNLOAD Dec 16 13:59:20.622000 audit: BPF prog-id=75 op=LOAD Dec 16 13:59:20.622000 audit: BPF prog-id=49 op=UNLOAD Dec 16 13:59:20.622000 audit: BPF prog-id=76 op=LOAD Dec 16 13:59:20.622000 audit: BPF prog-id=77 op=LOAD Dec 16 13:59:20.622000 audit: BPF prog-id=50 op=UNLOAD Dec 16 13:59:20.622000 audit: BPF prog-id=51 op=UNLOAD Dec 16 13:59:20.624000 audit: BPF prog-id=78 op=LOAD Dec 16 13:59:20.624000 audit: BPF prog-id=52 op=UNLOAD Dec 16 13:59:20.624000 audit: BPF prog-id=79 op=LOAD Dec 16 13:59:20.624000 audit: BPF prog-id=80 op=LOAD Dec 16 13:59:20.624000 audit: BPF prog-id=53 op=UNLOAD Dec 16 13:59:20.624000 audit: BPF prog-id=54 op=UNLOAD Dec 16 13:59:20.627000 audit: BPF prog-id=81 op=LOAD Dec 16 13:59:20.627000 audit: BPF prog-id=45 op=UNLOAD Dec 16 13:59:20.628000 audit: BPF prog-id=82 op=LOAD Dec 16 13:59:20.628000 audit: BPF prog-id=83 op=LOAD Dec 16 13:59:20.628000 audit: BPF prog-id=43 op=UNLOAD Dec 16 13:59:20.628000 audit: BPF prog-id=44 op=UNLOAD Dec 16 13:59:20.630000 audit: BPF prog-id=84 op=LOAD Dec 16 13:59:20.630000 audit: BPF prog-id=58 op=UNLOAD Dec 16 13:59:20.631000 audit: BPF prog-id=85 op=LOAD Dec 16 13:59:20.631000 audit: BPF prog-id=65 op=UNLOAD Dec 16 13:59:20.657703 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:59:20.657884 systemd[1]: kubelet.service: Failed with result 'signal'. 
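Editor's note: the burst of "BPF prog-id=... op=LOAD/UNLOAD" audit records above appears to be the normal side effect of the daemon reload, with systemd detaching and re-attaching per-unit BPF programs (old program IDs unloaded, replacements loaded). A small Python sketch (sample lines abridged from the records above) that tallies that churn:

    import re
    from collections import Counter

    # Abridged sample of the audit lines above.
    lines = [
        "audit: BPF prog-id=66 op=LOAD",
        "audit: BPF prog-id=55 op=UNLOAD",
        "audit: BPF prog-id=67 op=LOAD",
        "audit: BPF prog-id=68 op=LOAD",
        "audit: BPF prog-id=56 op=UNLOAD",
        "audit: BPF prog-id=57 op=UNLOAD",
    ]
    ops = Counter(m.group(1) for line in lines
                  if (m := re.search(r"op=(LOAD|UNLOAD)", line)))
    print(dict(ops))  # -> {'LOAD': 3, 'UNLOAD': 3}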
Dec 16 13:59:20.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 13:59:20.658373 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:59:20.658460 systemd[1]: kubelet.service: Consumed 187ms CPU time, 98.4M memory peak. Dec 16 13:59:20.661301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:59:21.264645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:59:21.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:21.278256 (kubelet)[2472]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:59:21.339810 kubelet[2472]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:59:21.339810 kubelet[2472]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:59:21.339810 kubelet[2472]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:59:21.339810 kubelet[2472]: I1216 13:59:21.339184 2472 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:59:22.156462 kubelet[2472]: I1216 13:59:22.156402 2472 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 13:59:22.156462 kubelet[2472]: I1216 13:59:22.156436 2472 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:59:22.156890 kubelet[2472]: I1216 13:59:22.156854 2472 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 13:59:22.206358 kubelet[2472]: E1216 13:59:22.206308 2472 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.79:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:59:22.207976 kubelet[2472]: I1216 13:59:22.207808 2472 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:59:22.223712 kubelet[2472]: I1216 13:59:22.223669 2472 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:59:22.228337 kubelet[2472]: I1216 13:59:22.228295 2472 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:59:22.228721 kubelet[2472]: I1216 13:59:22.228661 2472 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:59:22.229004 kubelet[2472]: I1216 13:59:22.228707 2472 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:59:22.230300 kubelet[2472]: I1216 13:59:22.230255 2472 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:59:22.230300 kubelet[2472]: I1216 13:59:22.230293 2472 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 13:59:22.230496 kubelet[2472]: I1216 13:59:22.230462 2472 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:59:22.235811 kubelet[2472]: I1216 13:59:22.235650 2472 kubelet.go:446] "Attempting to sync node with API server" Dec 16 13:59:22.235811 kubelet[2472]: I1216 13:59:22.235694 2472 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:59:22.235811 kubelet[2472]: I1216 13:59:22.235732 2472 kubelet.go:352] "Adding apiserver pod source" Dec 16 13:59:22.235811 kubelet[2472]: I1216 13:59:22.235769 2472 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:59:22.243500 kubelet[2472]: W1216 13:59:22.243414 2472 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused Dec 16 13:59:22.243686 kubelet[2472]: E1216 13:59:22.243494 2472 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 
10.128.0.79:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:59:22.243686 kubelet[2472]: I1216 13:59:22.243632 2472 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 13:59:22.244768 kubelet[2472]: I1216 13:59:22.244338 2472 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 13:59:22.244768 kubelet[2472]: W1216 13:59:22.244429 2472 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:59:22.249096 kubelet[2472]: I1216 13:59:22.249064 2472 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:59:22.249202 kubelet[2472]: I1216 13:59:22.249121 2472 server.go:1287] "Started kubelet" Dec 16 13:59:22.253037 kubelet[2472]: W1216 13:59:22.252984 2472 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused Dec 16 13:59:22.255351 kubelet[2472]: E1216 13:59:22.253947 2472 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.79:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:59:22.255351 kubelet[2472]: I1216 13:59:22.253173 2472 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:59:22.255351 kubelet[2472]: I1216 13:59:22.254504 2472 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:59:22.255351 kubelet[2472]: I1216 13:59:22.254958 2472 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:59:22.255795 kubelet[2472]: I1216 13:59:22.253124 2472 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:59:22.276801 kernel: kauditd_printk_skb: 36 callbacks suppressed Dec 16 13:59:22.276907 kernel: audit: type=1325 audit(1765893562.260:319): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:22.260000 audit[2483]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:22.277089 kubelet[2472]: I1216 13:59:22.266046 2472 server.go:479] "Adding debug handlers to kubelet server" Dec 16 13:59:22.277089 kubelet[2472]: I1216 13:59:22.267332 2472 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:59:22.277089 kubelet[2472]: E1216 13:59:22.267556 2472 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" Dec 16 13:59:22.277089 kubelet[2472]: I1216 13:59:22.272458 2472 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:59:22.277089 kubelet[2472]: I1216 13:59:22.272531 2472 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:59:22.260000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffded3f300 a2=0 a3=0 items=0 ppid=2472 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:22.313778 kernel: audit: type=1300 audit(1765893562.260:319): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffded3f300 a2=0 a3=0 items=0 ppid=2472 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:22.315006 kernel: audit: type=1327 audit(1765893562.260:319): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 13:59:22.260000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 13:59:22.315140 kubelet[2472]: I1216 13:59:22.314438 2472 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:59:22.315140 kubelet[2472]: E1216 13:59:22.314615 2472 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.79:6443: connect: connection refused" interval="200ms" Dec 16 13:59:22.316481 kubelet[2472]: W1216 13:59:22.316423 2472 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused Dec 16 13:59:22.318150 kubelet[2472]: E1216 13:59:22.318112 2472 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.79:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:59:22.325999 kubelet[2472]: I1216 13:59:22.325970 2472 factory.go:221] Registration of the systemd container factory successfully Dec 16 13:59:22.326251 kubelet[2472]: I1216 13:59:22.326225 2472 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:59:22.329933 kubelet[2472]: I1216 13:59:22.328606 2472 factory.go:221] Registration of the containerd container factory successfully Dec 16 13:59:22.262000 audit[2484]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:22.347960 kubelet[2472]: E1216 13:59:22.320028 2472 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal.1881b6d6b9019c20 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal,UID:ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal,},FirstTimestamp:2025-12-16 13:59:22.249092128 +0000 UTC m=+0.965459196,LastTimestamp:2025-12-16 13:59:22.249092128 +0000 UTC m=+0.965459196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal,}" Dec 16 13:59:22.262000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeddf510b0 a2=0 a3=0 items=0 ppid=2472 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:22.363314 kubelet[2472]: E1216 13:59:22.350553 2472 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:59:22.367778 kubelet[2472]: E1216 13:59:22.367725 2472 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" Dec 16 13:59:22.372042 kubelet[2472]: I1216 13:59:22.372002 2472 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:59:22.372396 kubelet[2472]: I1216 13:59:22.372315 2472 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:59:22.372396 kubelet[2472]: I1216 13:59:22.372344 2472 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:59:22.379863 kernel: audit: type=1325 audit(1765893562.262:320): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:22.379952 kernel: audit: type=1300 audit(1765893562.262:320): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeddf510b0 a2=0 a3=0 items=0 ppid=2472 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:22.380015 kernel: audit: type=1327 audit(1765893562.262:320): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:59:22.262000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:59:22.410738 kernel: audit: type=1325 audit(1765893562.269:321): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:22.269000 audit[2486]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:22.269000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffda53f7420 a2=0 a3=0 items=0 ppid=2472 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:22.440573 kubelet[2472]: I1216 13:59:22.419618 2472 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 13:59:22.440573 kubelet[2472]: I1216 13:59:22.425825 2472 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 16 13:59:22.440573 kubelet[2472]: I1216 13:59:22.426902 2472 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 13:59:22.440573 kubelet[2472]: I1216 13:59:22.426940 2472 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:59:22.440573 kubelet[2472]: I1216 13:59:22.428076 2472 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 13:59:22.440573 kubelet[2472]: W1216 13:59:22.427473 2472 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused Dec 16 13:59:22.440573 kubelet[2472]: E1216 13:59:22.428879 2472 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.79:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:59:22.440573 kubelet[2472]: E1216 13:59:22.429535 2472 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:59:22.269000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:59:22.460833 kernel: audit: type=1300 audit(1765893562.269:321): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffda53f7420 a2=0 a3=0 items=0 ppid=2472 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:22.460928 kernel: audit: type=1327 audit(1765893562.269:321): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:59:22.273000 audit[2488]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:22.461727 kubelet[2472]: I1216 13:59:22.461123 2472 policy_none.go:49] "None policy: Start" Dec 16 13:59:22.461727 kubelet[2472]: I1216 13:59:22.461153 2472 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:59:22.461727 kubelet[2472]: I1216 13:59:22.461176 2472 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:59:22.477034 kernel: audit: type=1325 audit(1765893562.273:322): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:22.273000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd86fa4a50 a2=0 a3=0 items=0 ppid=2472 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:22.273000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:59:22.418000 audit[2496]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:22.418000 
audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff66332360 a2=0 a3=0 items=0 ppid=2472 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:22.418000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 13:59:22.422000 audit[2497]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:22.422000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcdd6c30d0 a2=0 a3=0 items=0 ppid=2472 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:22.422000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 13:59:22.423000 audit[2498]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:22.423000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdfc0bf450 a2=0 a3=0 items=0 ppid=2472 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:22.423000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 13:59:22.430000 audit[2501]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:22.430000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb28fa960 a2=0 a3=0 items=0 ppid=2472 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:22.430000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 13:59:22.431000 audit[2500]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:22.431000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebadb8aa0 a2=0 a3=0 items=0 ppid=2472 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:22.431000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 13:59:22.436000 audit[2503]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:22.436000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=100 a0=3 a1=7fff01635f10 a2=0 a3=0 items=0 ppid=2472 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:22.436000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 13:59:22.436000 audit[2504]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:22.436000 audit[2504]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd349917b0 a2=0 a3=0 items=0 ppid=2472 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:22.436000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 13:59:22.439000 audit[2505]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:22.439000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff7e7f0e0 a2=0 a3=0 items=0 ppid=2472 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:22.439000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 13:59:22.478933 kubelet[2472]: E1216 13:59:22.476728 2472 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" Dec 16 13:59:22.486606 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:59:22.501269 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:59:22.507574 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:59:22.515647 kubelet[2472]: E1216 13:59:22.515606 2472 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.79:6443: connect: connection refused" interval="400ms" Dec 16 13:59:22.518942 kubelet[2472]: I1216 13:59:22.518886 2472 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 13:59:22.519195 kubelet[2472]: I1216 13:59:22.519159 2472 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:59:22.519286 kubelet[2472]: I1216 13:59:22.519184 2472 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:59:22.520260 kubelet[2472]: I1216 13:59:22.520095 2472 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:59:22.523569 kubelet[2472]: E1216 13:59:22.523487 2472 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:59:22.523654 kubelet[2472]: E1216 13:59:22.523625 2472 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" Dec 16 13:59:22.551670 systemd[1]: Created slice kubepods-burstable-pod428ef263de6ae06547cd911342c6f899.slice - libcontainer container kubepods-burstable-pod428ef263de6ae06547cd911342c6f899.slice. Dec 16 13:59:22.559766 kubelet[2472]: E1216 13:59:22.559708 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:22.565080 systemd[1]: Created slice kubepods-burstable-pod3cd7489d30dd83e6090b3ed395f32089.slice - libcontainer container kubepods-burstable-pod3cd7489d30dd83e6090b3ed395f32089.slice. Dec 16 13:59:22.572588 kubelet[2472]: E1216 13:59:22.572554 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:22.576474 systemd[1]: Created slice kubepods-burstable-pod8ea28b0fe41064d6fb8a5431b2ac4a8d.slice - libcontainer container kubepods-burstable-pod8ea28b0fe41064d6fb8a5431b2ac4a8d.slice. Dec 16 13:59:22.578278 kubelet[2472]: I1216 13:59:22.578246 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/428ef263de6ae06547cd911342c6f899-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"428ef263de6ae06547cd911342c6f899\") " pod="kube-system/kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:22.578465 kubelet[2472]: I1216 13:59:22.578432 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/428ef263de6ae06547cd911342c6f899-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"428ef263de6ae06547cd911342c6f899\") " pod="kube-system/kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:22.578650 kubelet[2472]: I1216 13:59:22.578614 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3cd7489d30dd83e6090b3ed395f32089-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"3cd7489d30dd83e6090b3ed395f32089\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:22.578909 kubelet[2472]: I1216 13:59:22.578802 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cd7489d30dd83e6090b3ed395f32089-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"3cd7489d30dd83e6090b3ed395f32089\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:22.578909 kubelet[2472]: I1216 13:59:22.578868 2472 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ea28b0fe41064d6fb8a5431b2ac4a8d-kubeconfig\") pod \"kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"8ea28b0fe41064d6fb8a5431b2ac4a8d\") " pod="kube-system/kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:22.579085 kubelet[2472]: I1216 13:59:22.579060 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/428ef263de6ae06547cd911342c6f899-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"428ef263de6ae06547cd911342c6f899\") " pod="kube-system/kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:22.579262 kubelet[2472]: I1216 13:59:22.579240 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3cd7489d30dd83e6090b3ed395f32089-ca-certs\") pod \"kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"3cd7489d30dd83e6090b3ed395f32089\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:22.579519 kubelet[2472]: I1216 13:59:22.579390 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3cd7489d30dd83e6090b3ed395f32089-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"3cd7489d30dd83e6090b3ed395f32089\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:22.579519 kubelet[2472]: I1216 13:59:22.579473 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3cd7489d30dd83e6090b3ed395f32089-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"3cd7489d30dd83e6090b3ed395f32089\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:22.579898 kubelet[2472]: E1216 13:59:22.579861 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:22.624703 kubelet[2472]: I1216 13:59:22.624669 2472 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:22.625205 kubelet[2472]: E1216 13:59:22.625154 2472 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.79:6443/api/v1/nodes\": dial tcp 10.128.0.79:6443: connect: connection refused" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:22.830489 kubelet[2472]: I1216 13:59:22.830420 2472 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:22.831009 kubelet[2472]: E1216 13:59:22.830949 2472 kubelet_node_status.go:107] "Unable to register node with 
API server" err="Post \"https://10.128.0.79:6443/api/v1/nodes\": dial tcp 10.128.0.79:6443: connect: connection refused" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:22.862064 containerd[1603]: time="2025-12-16T13:59:22.861986036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal,Uid:428ef263de6ae06547cd911342c6f899,Namespace:kube-system,Attempt:0,}" Dec 16 13:59:22.874166 containerd[1603]: time="2025-12-16T13:59:22.874096497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal,Uid:3cd7489d30dd83e6090b3ed395f32089,Namespace:kube-system,Attempt:0,}" Dec 16 13:59:22.889296 containerd[1603]: time="2025-12-16T13:59:22.889231849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal,Uid:8ea28b0fe41064d6fb8a5431b2ac4a8d,Namespace:kube-system,Attempt:0,}" Dec 16 13:59:22.894400 containerd[1603]: time="2025-12-16T13:59:22.894351703Z" level=info msg="connecting to shim 40a280a1cfa496ed0b277d0300b7f95e86f43608f4bb58f841d01e1deea11910" address="unix:///run/containerd/s/d31b79dff9e166c0ac753df881a713520319321122ffb8a4e28665df27cc4899" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:59:22.918771 kubelet[2472]: E1216 13:59:22.917341 2472 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.79:6443: connect: connection refused" interval="800ms" Dec 16 13:59:22.934552 containerd[1603]: time="2025-12-16T13:59:22.934207594Z" level=info msg="connecting to shim 8a165bbad2f978706143cb5490506765f5c2d4dad9f3ff5843b96e7ba0ffc11d" address="unix:///run/containerd/s/7d8e2db6fc1300decc34945d9d265a2c5c987090773afa0b32d4aa4173a43508" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:59:22.971775 containerd[1603]: time="2025-12-16T13:59:22.969971414Z" level=info msg="connecting to shim d6a6a32efb4092265a23405bb8d0c788c2fda6a943e7f4701670ed2c15d59371" address="unix:///run/containerd/s/93f99e7b61e4ff59c450c4ec97825144cf37f2403eab22c7ea29d6b3f342cc04" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:59:22.979056 systemd[1]: Started cri-containerd-40a280a1cfa496ed0b277d0300b7f95e86f43608f4bb58f841d01e1deea11910.scope - libcontainer container 40a280a1cfa496ed0b277d0300b7f95e86f43608f4bb58f841d01e1deea11910. Dec 16 13:59:23.009046 systemd[1]: Started cri-containerd-8a165bbad2f978706143cb5490506765f5c2d4dad9f3ff5843b96e7ba0ffc11d.scope - libcontainer container 8a165bbad2f978706143cb5490506765f5c2d4dad9f3ff5843b96e7ba0ffc11d. 
Dec 16 13:59:23.022000 audit: BPF prog-id=86 op=LOAD Dec 16 13:59:23.024000 audit: BPF prog-id=87 op=LOAD Dec 16 13:59:23.024000 audit[2528]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2516 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430613238306131636661343936656430623237376430333030623766 Dec 16 13:59:23.024000 audit: BPF prog-id=87 op=UNLOAD Dec 16 13:59:23.024000 audit[2528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2516 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430613238306131636661343936656430623237376430333030623766 Dec 16 13:59:23.027000 audit: BPF prog-id=88 op=LOAD Dec 16 13:59:23.027000 audit[2528]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2516 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430613238306131636661343936656430623237376430333030623766 Dec 16 13:59:23.027000 audit: BPF prog-id=89 op=LOAD Dec 16 13:59:23.027000 audit[2528]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2516 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430613238306131636661343936656430623237376430333030623766 Dec 16 13:59:23.028000 audit: BPF prog-id=89 op=UNLOAD Dec 16 13:59:23.028000 audit[2528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2516 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430613238306131636661343936656430623237376430333030623766 Dec 16 13:59:23.029000 audit: BPF prog-id=88 op=UNLOAD Dec 16 13:59:23.029000 audit[2528]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2516 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430613238306131636661343936656430623237376430333030623766 Dec 16 13:59:23.029000 audit: BPF prog-id=90 op=LOAD Dec 16 13:59:23.029000 audit[2528]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2516 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430613238306131636661343936656430623237376430333030623766 Dec 16 13:59:23.039000 audit: BPF prog-id=91 op=LOAD Dec 16 13:59:23.042000 audit: BPF prog-id=92 op=LOAD Dec 16 13:59:23.042000 audit[2564]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2537 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861313635626261643266393738373036313433636235343930353036 Dec 16 13:59:23.044000 audit: BPF prog-id=92 op=UNLOAD Dec 16 13:59:23.044000 audit[2564]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861313635626261643266393738373036313433636235343930353036 Dec 16 13:59:23.046000 audit: BPF prog-id=93 op=LOAD Dec 16 13:59:23.046000 audit[2564]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2537 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861313635626261643266393738373036313433636235343930353036 Dec 16 13:59:23.047000 audit: BPF prog-id=94 op=LOAD Dec 16 13:59:23.047000 audit[2564]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2537 pid=2564 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861313635626261643266393738373036313433636235343930353036 Dec 16 13:59:23.047000 audit: BPF prog-id=94 op=UNLOAD Dec 16 13:59:23.047000 audit[2564]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861313635626261643266393738373036313433636235343930353036 Dec 16 13:59:23.048000 audit: BPF prog-id=93 op=UNLOAD Dec 16 13:59:23.048000 audit[2564]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861313635626261643266393738373036313433636235343930353036 Dec 16 13:59:23.048000 audit: BPF prog-id=95 op=LOAD Dec 16 13:59:23.048000 audit[2564]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2537 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.048000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861313635626261643266393738373036313433636235343930353036 Dec 16 13:59:23.053176 systemd[1]: Started cri-containerd-d6a6a32efb4092265a23405bb8d0c788c2fda6a943e7f4701670ed2c15d59371.scope - libcontainer container d6a6a32efb4092265a23405bb8d0c788c2fda6a943e7f4701670ed2c15d59371. 
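[Note] The audit PROCTITLE records in this section store the invoking command line hex-encoded, with NUL bytes separating the argv elements: the iptables/ip6tables entries earlier decode to commands such as "iptables -w 5 -W 100000 -N KUBE-KUBELET-CANARY -t nat", and the runc entries decode to (truncated) "runc --root /run/containerd/runc/k8s.io --log ..." invocations. A minimal decoding sketch, assuming Python 3; the sample value is one of the PROCTITLE fields shown above:

# Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
def decode_proctitle(hex_value):
    raw = bytes.fromhex(hex_value)
    return [part.decode("utf-8", errors="replace")
            for part in raw.split(b"\x00") if part]

# One of the NETFILTER_CFG-related PROCTITLE values from this section.
SAMPLE = ("69707461626C6573002D770035002D5700313030303030002D4E00"
          "4B5542452D4B5542454C45542D43414E415259002D74006E6174")

if __name__ == "__main__":
    print(" ".join(decode_proctitle(SAMPLE)))
    # -> iptables -w 5 -W 100000 -N KUBE-KUBELET-CANARY -t nat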
Dec 16 13:59:23.090000 audit: BPF prog-id=96 op=LOAD Dec 16 13:59:23.091000 audit: BPF prog-id=97 op=LOAD Dec 16 13:59:23.091000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2567 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436613661333265666234303932323635613233343035626238643063 Dec 16 13:59:23.091000 audit: BPF prog-id=97 op=UNLOAD Dec 16 13:59:23.091000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2567 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436613661333265666234303932323635613233343035626238643063 Dec 16 13:59:23.091000 audit: BPF prog-id=98 op=LOAD Dec 16 13:59:23.091000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2567 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436613661333265666234303932323635613233343035626238643063 Dec 16 13:59:23.091000 audit: BPF prog-id=99 op=LOAD Dec 16 13:59:23.091000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2567 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436613661333265666234303932323635613233343035626238643063 Dec 16 13:59:23.092000 audit: BPF prog-id=99 op=UNLOAD Dec 16 13:59:23.092000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2567 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436613661333265666234303932323635613233343035626238643063 Dec 16 13:59:23.092000 audit: BPF prog-id=98 op=UNLOAD Dec 16 13:59:23.092000 audit[2590]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2567 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436613661333265666234303932323635613233343035626238643063 Dec 16 13:59:23.092000 audit: BPF prog-id=100 op=LOAD Dec 16 13:59:23.092000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2567 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436613661333265666234303932323635613233343035626238643063 Dec 16 13:59:23.151010 containerd[1603]: time="2025-12-16T13:59:23.150882468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal,Uid:428ef263de6ae06547cd911342c6f899,Namespace:kube-system,Attempt:0,} returns sandbox id \"40a280a1cfa496ed0b277d0300b7f95e86f43608f4bb58f841d01e1deea11910\"" Dec 16 13:59:23.154771 kubelet[2472]: E1216 13:59:23.154698 2472 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-21291" Dec 16 13:59:23.157815 containerd[1603]: time="2025-12-16T13:59:23.157132015Z" level=info msg="CreateContainer within sandbox \"40a280a1cfa496ed0b277d0300b7f95e86f43608f4bb58f841d01e1deea11910\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:59:23.175193 containerd[1603]: time="2025-12-16T13:59:23.175148739Z" level=info msg="Container c349ec171711f7d2af95294620f9d6f4d7a8eb6a8807b6ad909388c872ce99cf: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:59:23.178475 containerd[1603]: time="2025-12-16T13:59:23.178421778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal,Uid:3cd7489d30dd83e6090b3ed395f32089,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a165bbad2f978706143cb5490506765f5c2d4dad9f3ff5843b96e7ba0ffc11d\"" Dec 16 13:59:23.181587 kubelet[2472]: E1216 13:59:23.181529 2472 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flat" Dec 16 13:59:23.183723 containerd[1603]: time="2025-12-16T13:59:23.183686676Z" level=info msg="CreateContainer within sandbox \"8a165bbad2f978706143cb5490506765f5c2d4dad9f3ff5843b96e7ba0ffc11d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:59:23.189533 containerd[1603]: time="2025-12-16T13:59:23.189489919Z" level=info msg="CreateContainer within sandbox 
\"40a280a1cfa496ed0b277d0300b7f95e86f43608f4bb58f841d01e1deea11910\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c349ec171711f7d2af95294620f9d6f4d7a8eb6a8807b6ad909388c872ce99cf\"" Dec 16 13:59:23.190969 containerd[1603]: time="2025-12-16T13:59:23.190931405Z" level=info msg="StartContainer for \"c349ec171711f7d2af95294620f9d6f4d7a8eb6a8807b6ad909388c872ce99cf\"" Dec 16 13:59:23.194531 containerd[1603]: time="2025-12-16T13:59:23.194463889Z" level=info msg="connecting to shim c349ec171711f7d2af95294620f9d6f4d7a8eb6a8807b6ad909388c872ce99cf" address="unix:///run/containerd/s/d31b79dff9e166c0ac753df881a713520319321122ffb8a4e28665df27cc4899" protocol=ttrpc version=3 Dec 16 13:59:23.196737 kubelet[2472]: W1216 13:59:23.196593 2472 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused Dec 16 13:59:23.196737 kubelet[2472]: E1216 13:59:23.196856 2472 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.79:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.79:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:59:23.213619 containerd[1603]: time="2025-12-16T13:59:23.213569525Z" level=info msg="Container db749e50eceaa548e1b5b177e4248a2537ac32a4f2c26a4d873b2d2513f9a0bb: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:59:23.215190 containerd[1603]: time="2025-12-16T13:59:23.215065002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal,Uid:8ea28b0fe41064d6fb8a5431b2ac4a8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6a6a32efb4092265a23405bb8d0c788c2fda6a943e7f4701670ed2c15d59371\"" Dec 16 13:59:23.217707 kubelet[2472]: E1216 13:59:23.217657 2472 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-21291" Dec 16 13:59:23.220044 containerd[1603]: time="2025-12-16T13:59:23.219924595Z" level=info msg="CreateContainer within sandbox \"d6a6a32efb4092265a23405bb8d0c788c2fda6a943e7f4701670ed2c15d59371\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:59:23.233299 containerd[1603]: time="2025-12-16T13:59:23.232451944Z" level=info msg="CreateContainer within sandbox \"8a165bbad2f978706143cb5490506765f5c2d4dad9f3ff5843b96e7ba0ffc11d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"db749e50eceaa548e1b5b177e4248a2537ac32a4f2c26a4d873b2d2513f9a0bb\"" Dec 16 13:59:23.234278 containerd[1603]: time="2025-12-16T13:59:23.234223084Z" level=info msg="StartContainer for \"db749e50eceaa548e1b5b177e4248a2537ac32a4f2c26a4d873b2d2513f9a0bb\"" Dec 16 13:59:23.237691 containerd[1603]: time="2025-12-16T13:59:23.237649651Z" level=info msg="connecting to shim db749e50eceaa548e1b5b177e4248a2537ac32a4f2c26a4d873b2d2513f9a0bb" address="unix:///run/containerd/s/7d8e2db6fc1300decc34945d9d265a2c5c987090773afa0b32d4aa4173a43508" protocol=ttrpc version=3 Dec 16 13:59:23.238443 systemd[1]: Started 
cri-containerd-c349ec171711f7d2af95294620f9d6f4d7a8eb6a8807b6ad909388c872ce99cf.scope - libcontainer container c349ec171711f7d2af95294620f9d6f4d7a8eb6a8807b6ad909388c872ce99cf. Dec 16 13:59:23.243859 containerd[1603]: time="2025-12-16T13:59:23.243775736Z" level=info msg="Container d39c6e0f2f09efa7999f09cf8f3cc7102028942643c55c9a710ed1427d94065b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:59:23.247478 kubelet[2472]: I1216 13:59:23.247446 2472 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:23.248010 kubelet[2472]: E1216 13:59:23.247898 2472 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.79:6443/api/v1/nodes\": dial tcp 10.128.0.79:6443: connect: connection refused" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:23.261021 containerd[1603]: time="2025-12-16T13:59:23.260906860Z" level=info msg="CreateContainer within sandbox \"d6a6a32efb4092265a23405bb8d0c788c2fda6a943e7f4701670ed2c15d59371\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d39c6e0f2f09efa7999f09cf8f3cc7102028942643c55c9a710ed1427d94065b\"" Dec 16 13:59:23.264024 containerd[1603]: time="2025-12-16T13:59:23.263967021Z" level=info msg="StartContainer for \"d39c6e0f2f09efa7999f09cf8f3cc7102028942643c55c9a710ed1427d94065b\"" Dec 16 13:59:23.265673 containerd[1603]: time="2025-12-16T13:59:23.265633410Z" level=info msg="connecting to shim d39c6e0f2f09efa7999f09cf8f3cc7102028942643c55c9a710ed1427d94065b" address="unix:///run/containerd/s/93f99e7b61e4ff59c450c4ec97825144cf37f2403eab22c7ea29d6b3f342cc04" protocol=ttrpc version=3 Dec 16 13:59:23.279000 audit: BPF prog-id=101 op=LOAD Dec 16 13:59:23.280000 audit: BPF prog-id=102 op=LOAD Dec 16 13:59:23.280000 audit[2648]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2516 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333343965633137313731316637643261663935323934363230663964 Dec 16 13:59:23.280000 audit: BPF prog-id=102 op=UNLOAD Dec 16 13:59:23.280000 audit[2648]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2516 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333343965633137313731316637643261663935323934363230663964 Dec 16 13:59:23.282000 audit: BPF prog-id=103 op=LOAD Dec 16 13:59:23.282000 audit[2648]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2516 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.282000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333343965633137313731316637643261663935323934363230663964 Dec 16 13:59:23.282000 audit: BPF prog-id=104 op=LOAD Dec 16 13:59:23.282000 audit[2648]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2516 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333343965633137313731316637643261663935323934363230663964 Dec 16 13:59:23.282000 audit: BPF prog-id=104 op=UNLOAD Dec 16 13:59:23.282000 audit[2648]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2516 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333343965633137313731316637643261663935323934363230663964 Dec 16 13:59:23.282000 audit: BPF prog-id=103 op=UNLOAD Dec 16 13:59:23.282000 audit[2648]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2516 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333343965633137313731316637643261663935323934363230663964 Dec 16 13:59:23.282000 audit: BPF prog-id=105 op=LOAD Dec 16 13:59:23.282000 audit[2648]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2516 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333343965633137313731316637643261663935323934363230663964 Dec 16 13:59:23.290283 systemd[1]: Started cri-containerd-db749e50eceaa548e1b5b177e4248a2537ac32a4f2c26a4d873b2d2513f9a0bb.scope - libcontainer container db749e50eceaa548e1b5b177e4248a2537ac32a4f2c26a4d873b2d2513f9a0bb. Dec 16 13:59:23.324185 systemd[1]: Started cri-containerd-d39c6e0f2f09efa7999f09cf8f3cc7102028942643c55c9a710ed1427d94065b.scope - libcontainer container d39c6e0f2f09efa7999f09cf8f3cc7102028942643c55c9a710ed1427d94065b. 
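[Note] Each container start above is bracketed by a burst of "audit: BPF prog-id=N op=LOAD" / "op=UNLOAD" records as runc attaches its programs, and most of the short-lived ones are unloaded almost immediately. A rough sketch for checking which program IDs in a capture like this never get a matching UNLOAD, assuming Python 3 and a plain-text file of log output in the format shown here (the filename is illustrative):

import re
import sys

# Matches records of the form "audit: BPF prog-id=101 op=LOAD" / "op=UNLOAD".
BPF_RE = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def still_loaded(log_text):
    loaded = set()
    for prog_id, op in BPF_RE.findall(log_text):
        if op == "LOAD":
            loaded.add(prog_id)
        else:
            loaded.discard(prog_id)
    return sorted(loaded, key=int)

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "console.log"
    with open(path) as f:
        print("prog-ids without a matching UNLOAD:", still_loaded(f.read()))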
Dec 16 13:59:23.338000 audit: BPF prog-id=106 op=LOAD Dec 16 13:59:23.340000 audit: BPF prog-id=107 op=LOAD Dec 16 13:59:23.340000 audit[2662]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2537 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462373439653530656365616135343865316235623137376534323438 Dec 16 13:59:23.340000 audit: BPF prog-id=107 op=UNLOAD Dec 16 13:59:23.340000 audit[2662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462373439653530656365616135343865316235623137376534323438 Dec 16 13:59:23.341000 audit: BPF prog-id=108 op=LOAD Dec 16 13:59:23.341000 audit[2662]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2537 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462373439653530656365616135343865316235623137376534323438 Dec 16 13:59:23.341000 audit: BPF prog-id=109 op=LOAD Dec 16 13:59:23.341000 audit[2662]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2537 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462373439653530656365616135343865316235623137376534323438 Dec 16 13:59:23.341000 audit: BPF prog-id=109 op=UNLOAD Dec 16 13:59:23.341000 audit[2662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462373439653530656365616135343865316235623137376534323438 Dec 16 13:59:23.341000 audit: BPF prog-id=108 op=UNLOAD Dec 16 13:59:23.341000 audit[2662]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462373439653530656365616135343865316235623137376534323438 Dec 16 13:59:23.341000 audit: BPF prog-id=110 op=LOAD Dec 16 13:59:23.341000 audit[2662]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2537 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462373439653530656365616135343865316235623137376534323438 Dec 16 13:59:23.375998 kubelet[2472]: W1216 13:59:23.375932 2472 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused Dec 16 13:59:23.377930 kubelet[2472]: E1216 13:59:23.377844 2472 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.79:6443: connect: connection refused" logger="UnhandledError" Dec 16 13:59:23.384576 containerd[1603]: time="2025-12-16T13:59:23.384496661Z" level=info msg="StartContainer for \"c349ec171711f7d2af95294620f9d6f4d7a8eb6a8807b6ad909388c872ce99cf\" returns successfully" Dec 16 13:59:23.411000 audit: BPF prog-id=111 op=LOAD Dec 16 13:59:23.413000 audit: BPF prog-id=112 op=LOAD Dec 16 13:59:23.413000 audit[2680]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2567 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433396336653066326630396566613739393966303963663866336363 Dec 16 13:59:23.414000 audit: BPF prog-id=112 op=UNLOAD Dec 16 13:59:23.414000 audit[2680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2567 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.414000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433396336653066326630396566613739393966303963663866336363 Dec 16 13:59:23.414000 audit: BPF prog-id=113 op=LOAD Dec 16 13:59:23.414000 audit[2680]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2567 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433396336653066326630396566613739393966303963663866336363 Dec 16 13:59:23.415000 audit: BPF prog-id=114 op=LOAD Dec 16 13:59:23.415000 audit[2680]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2567 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433396336653066326630396566613739393966303963663866336363 Dec 16 13:59:23.415000 audit: BPF prog-id=114 op=UNLOAD Dec 16 13:59:23.415000 audit[2680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2567 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433396336653066326630396566613739393966303963663866336363 Dec 16 13:59:23.415000 audit: BPF prog-id=113 op=UNLOAD Dec 16 13:59:23.415000 audit[2680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2567 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433396336653066326630396566613739393966303963663866336363 Dec 16 13:59:23.415000 audit: BPF prog-id=115 op=LOAD Dec 16 13:59:23.415000 audit[2680]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2567 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:23.415000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433396336653066326630396566613739393966303963663866336363 Dec 16 13:59:23.454644 containerd[1603]: time="2025-12-16T13:59:23.454594460Z" level=info msg="StartContainer for \"db749e50eceaa548e1b5b177e4248a2537ac32a4f2c26a4d873b2d2513f9a0bb\" returns successfully" Dec 16 13:59:23.465676 kubelet[2472]: E1216 13:59:23.465645 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:23.562107 containerd[1603]: time="2025-12-16T13:59:23.562029632Z" level=info msg="StartContainer for \"d39c6e0f2f09efa7999f09cf8f3cc7102028942643c55c9a710ed1427d94065b\" returns successfully" Dec 16 13:59:24.054661 kubelet[2472]: I1216 13:59:24.054271 2472 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:24.482582 kubelet[2472]: E1216 13:59:24.482460 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:24.488767 kubelet[2472]: E1216 13:59:24.488063 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:24.488767 kubelet[2472]: E1216 13:59:24.488609 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:25.490387 kubelet[2472]: E1216 13:59:25.490340 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:25.490971 kubelet[2472]: E1216 13:59:25.490846 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:26.499064 kubelet[2472]: E1216 13:59:26.499017 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:26.499704 kubelet[2472]: E1216 13:59:26.499671 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:26.913233 kubelet[2472]: E1216 13:59:26.913152 2472 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:26.943412 kubelet[2472]: I1216 13:59:26.943115 2472 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:26.968622 kubelet[2472]: I1216 13:59:26.968582 2472 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:27.001076 kubelet[2472]: E1216 13:59:27.000807 2472 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:27.001076 kubelet[2472]: I1216 13:59:27.000848 2472 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:27.010848 kubelet[2472]: E1216 13:59:27.010810 2472 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:27.011231 kubelet[2472]: I1216 13:59:27.011017 2472 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:27.015719 kubelet[2472]: E1216 13:59:27.015682 2472 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:27.255362 kubelet[2472]: I1216 13:59:27.255215 2472 apiserver.go:52] "Watching apiserver" Dec 16 13:59:27.273343 kubelet[2472]: I1216 13:59:27.273303 2472 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:59:27.910774 kubelet[2472]: I1216 13:59:27.910014 2472 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:27.930347 kubelet[2472]: W1216 13:59:27.930299 2472 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 16 13:59:28.771774 systemd[1]: Reload requested from client PID 2748 ('systemctl') (unit session-10.scope)... Dec 16 13:59:28.771801 systemd[1]: Reloading... Dec 16 13:59:29.001785 zram_generator::config[2795]: No configuration found. Dec 16 13:59:29.320057 systemd[1]: Reloading finished in 547 ms. Dec 16 13:59:29.363721 kubelet[2472]: I1216 13:59:29.363685 2472 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:59:29.364230 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:59:29.383560 systemd[1]: kubelet.service: Deactivated successfully. 
Dec 16 13:59:29.383986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:59:29.384076 systemd[1]: kubelet.service: Consumed 1.487s CPU time, 131.8M memory peak. Dec 16 13:59:29.411186 kernel: kauditd_printk_skb: 158 callbacks suppressed Dec 16 13:59:29.411299 kernel: audit: type=1131 audit(1765893569.383:379): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:29.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:29.390340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:59:29.396000 audit: BPF prog-id=116 op=LOAD Dec 16 13:59:29.396000 audit: BPF prog-id=69 op=UNLOAD Dec 16 13:59:29.420771 kernel: audit: type=1334 audit(1765893569.396:380): prog-id=116 op=LOAD Dec 16 13:59:29.420828 kernel: audit: type=1334 audit(1765893569.396:381): prog-id=69 op=UNLOAD Dec 16 13:59:29.435102 kernel: audit: type=1334 audit(1765893569.397:382): prog-id=117 op=LOAD Dec 16 13:59:29.397000 audit: BPF prog-id=117 op=LOAD Dec 16 13:59:29.442670 kernel: audit: type=1334 audit(1765893569.397:383): prog-id=118 op=LOAD Dec 16 13:59:29.397000 audit: BPF prog-id=118 op=LOAD Dec 16 13:59:29.397000 audit: BPF prog-id=70 op=UNLOAD Dec 16 13:59:29.458454 kernel: audit: type=1334 audit(1765893569.397:384): prog-id=70 op=UNLOAD Dec 16 13:59:29.458553 kernel: audit: type=1334 audit(1765893569.397:385): prog-id=71 op=UNLOAD Dec 16 13:59:29.458604 kernel: audit: type=1334 audit(1765893569.400:386): prog-id=119 op=LOAD Dec 16 13:59:29.397000 audit: BPF prog-id=71 op=UNLOAD Dec 16 13:59:29.400000 audit: BPF prog-id=119 op=LOAD Dec 16 13:59:29.400000 audit: BPF prog-id=84 op=UNLOAD Dec 16 13:59:29.472579 kernel: audit: type=1334 audit(1765893569.400:387): prog-id=84 op=UNLOAD Dec 16 13:59:29.472671 kernel: audit: type=1334 audit(1765893569.401:388): prog-id=120 op=LOAD Dec 16 13:59:29.401000 audit: BPF prog-id=120 op=LOAD Dec 16 13:59:29.401000 audit: BPF prog-id=75 op=UNLOAD Dec 16 13:59:29.401000 audit: BPF prog-id=121 op=LOAD Dec 16 13:59:29.401000 audit: BPF prog-id=122 op=LOAD Dec 16 13:59:29.401000 audit: BPF prog-id=76 op=UNLOAD Dec 16 13:59:29.401000 audit: BPF prog-id=77 op=UNLOAD Dec 16 13:59:29.402000 audit: BPF prog-id=123 op=LOAD Dec 16 13:59:29.402000 audit: BPF prog-id=78 op=UNLOAD Dec 16 13:59:29.402000 audit: BPF prog-id=124 op=LOAD Dec 16 13:59:29.402000 audit: BPF prog-id=125 op=LOAD Dec 16 13:59:29.403000 audit: BPF prog-id=79 op=UNLOAD Dec 16 13:59:29.403000 audit: BPF prog-id=80 op=UNLOAD Dec 16 13:59:29.405000 audit: BPF prog-id=126 op=LOAD Dec 16 13:59:29.405000 audit: BPF prog-id=85 op=UNLOAD Dec 16 13:59:29.406000 audit: BPF prog-id=127 op=LOAD Dec 16 13:59:29.406000 audit: BPF prog-id=66 op=UNLOAD Dec 16 13:59:29.406000 audit: BPF prog-id=128 op=LOAD Dec 16 13:59:29.406000 audit: BPF prog-id=129 op=LOAD Dec 16 13:59:29.406000 audit: BPF prog-id=67 op=UNLOAD Dec 16 13:59:29.406000 audit: BPF prog-id=68 op=UNLOAD Dec 16 13:59:29.407000 audit: BPF prog-id=130 op=LOAD Dec 16 13:59:29.407000 audit: BPF prog-id=131 op=LOAD Dec 16 13:59:29.407000 audit: BPF prog-id=82 op=UNLOAD Dec 16 13:59:29.407000 audit: BPF prog-id=83 op=UNLOAD Dec 16 13:59:29.409000 audit: BPF prog-id=132 op=LOAD Dec 16 13:59:29.409000 audit: BPF 
prog-id=81 op=UNLOAD Dec 16 13:59:29.413000 audit: BPF prog-id=133 op=LOAD Dec 16 13:59:29.429000 audit: BPF prog-id=72 op=UNLOAD Dec 16 13:59:29.429000 audit: BPF prog-id=134 op=LOAD Dec 16 13:59:29.429000 audit: BPF prog-id=135 op=LOAD Dec 16 13:59:29.429000 audit: BPF prog-id=73 op=UNLOAD Dec 16 13:59:29.429000 audit: BPF prog-id=74 op=UNLOAD Dec 16 13:59:29.811938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:59:29.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:29.826262 (kubelet)[2843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:59:29.901434 kubelet[2843]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:59:29.901434 kubelet[2843]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:59:29.901434 kubelet[2843]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:59:29.901434 kubelet[2843]: I1216 13:59:29.901363 2843 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:59:29.916774 kubelet[2843]: I1216 13:59:29.915425 2843 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 13:59:29.916774 kubelet[2843]: I1216 13:59:29.915457 2843 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:59:29.916774 kubelet[2843]: I1216 13:59:29.915780 2843 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 13:59:29.917596 kubelet[2843]: I1216 13:59:29.917556 2843 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 13:59:29.922384 kubelet[2843]: I1216 13:59:29.921249 2843 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:59:29.930908 kubelet[2843]: I1216 13:59:29.930889 2843 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:59:29.935223 kubelet[2843]: I1216 13:59:29.935066 2843 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 13:59:29.935668 kubelet[2843]: I1216 13:59:29.935600 2843 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:59:29.937198 kubelet[2843]: I1216 13:59:29.935646 2843 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:59:29.937198 kubelet[2843]: I1216 13:59:29.935998 2843 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 13:59:29.937198 kubelet[2843]: I1216 13:59:29.936016 2843 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 13:59:29.937198 kubelet[2843]: I1216 13:59:29.936079 2843 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:59:29.937530 kubelet[2843]: I1216 13:59:29.936281 2843 kubelet.go:446] "Attempting to sync node with API server" Dec 16 13:59:29.937530 kubelet[2843]: I1216 13:59:29.936309 2843 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:59:29.937530 kubelet[2843]: I1216 13:59:29.936346 2843 kubelet.go:352] "Adding apiserver pod source" Dec 16 13:59:29.937530 kubelet[2843]: I1216 13:59:29.936371 2843 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:59:29.946854 kubelet[2843]: I1216 13:59:29.946204 2843 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 13:59:29.952312 kubelet[2843]: I1216 13:59:29.952282 2843 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 13:59:29.956719 kubelet[2843]: I1216 13:59:29.956383 2843 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 13:59:29.956719 kubelet[2843]: I1216 13:59:29.956426 2843 server.go:1287] "Started kubelet" Dec 16 13:59:29.962791 kubelet[2843]: I1216 13:59:29.961613 2843 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 
13:59:29.962791 kubelet[2843]: I1216 13:59:29.962052 2843 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:59:29.964251 kubelet[2843]: I1216 13:59:29.964165 2843 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:59:29.966092 kubelet[2843]: I1216 13:59:29.966013 2843 server.go:479] "Adding debug handlers to kubelet server" Dec 16 13:59:29.973765 kubelet[2843]: I1216 13:59:29.973650 2843 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:59:29.982978 kubelet[2843]: I1216 13:59:29.982950 2843 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:59:29.989243 kubelet[2843]: I1216 13:59:29.989220 2843 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 13:59:29.989523 kubelet[2843]: E1216 13:59:29.989491 2843 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" not found" Dec 16 13:59:29.990334 kubelet[2843]: I1216 13:59:29.990168 2843 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 13:59:29.990432 kubelet[2843]: I1216 13:59:29.990338 2843 reconciler.go:26] "Reconciler: start to sync state" Dec 16 13:59:30.003084 kubelet[2843]: I1216 13:59:30.002100 2843 factory.go:221] Registration of the systemd container factory successfully Dec 16 13:59:30.003084 kubelet[2843]: I1216 13:59:30.002226 2843 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:59:30.008832 kubelet[2843]: E1216 13:59:30.008526 2843 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:59:30.009396 kubelet[2843]: I1216 13:59:30.009337 2843 factory.go:221] Registration of the containerd container factory successfully Dec 16 13:59:30.026785 kubelet[2843]: I1216 13:59:30.026710 2843 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 13:59:30.028494 kubelet[2843]: I1216 13:59:30.028449 2843 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 13:59:30.028494 kubelet[2843]: I1216 13:59:30.028486 2843 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 13:59:30.028642 kubelet[2843]: I1216 13:59:30.028519 2843 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
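The "Creating Container Manager object based on Node Config" line above packs the kubelet's hard-eviction settings into a single JSON blob. A minimal sketch rendering just the HardEvictionThresholds from that line in readable form (the JSON below is abridged from the log; the GracePeriod and MinReclaim fields are dropped):

```python
import json

# HardEvictionThresholds abridged from the nodeConfig line logged above.
thresholds = json.loads("""
[
  {"Signal": "imagefs.available",  "Operator": "LessThan", "Value": {"Quantity": null,    "Percentage": 0.15}},
  {"Signal": "imagefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": null,    "Percentage": 0.05}},
  {"Signal": "memory.available",   "Operator": "LessThan", "Value": {"Quantity": "100Mi", "Percentage": 0}},
  {"Signal": "nodefs.available",   "Operator": "LessThan", "Value": {"Quantity": null,    "Percentage": 0.1}},
  {"Signal": "nodefs.inodesFree",  "Operator": "LessThan", "Value": {"Quantity": null,    "Percentage": 0.05}}
]
""")

for t in thresholds:
    value = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
    print(f'evict when {t["Signal"]} {t["Operator"]} {value}')
# evict when imagefs.available LessThan 15%
# evict when memory.available LessThan 100Mi   ...and so on
```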
Dec 16 13:59:30.028642 kubelet[2843]: I1216 13:59:30.028540 2843 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 13:59:30.028642 kubelet[2843]: E1216 13:59:30.028606 2843 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:59:30.106892 kubelet[2843]: I1216 13:59:30.106440 2843 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:59:30.106892 kubelet[2843]: I1216 13:59:30.106489 2843 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:59:30.106892 kubelet[2843]: I1216 13:59:30.106523 2843 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:59:30.106892 kubelet[2843]: I1216 13:59:30.106867 2843 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:59:30.107165 kubelet[2843]: I1216 13:59:30.106886 2843 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:59:30.107338 kubelet[2843]: I1216 13:59:30.107280 2843 policy_none.go:49] "None policy: Start" Dec 16 13:59:30.107338 kubelet[2843]: I1216 13:59:30.107311 2843 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 13:59:30.107338 kubelet[2843]: I1216 13:59:30.107332 2843 state_mem.go:35] "Initializing new in-memory state store" Dec 16 13:59:30.107768 kubelet[2843]: I1216 13:59:30.107534 2843 state_mem.go:75] "Updated machine memory state" Dec 16 13:59:30.122725 kubelet[2843]: I1216 13:59:30.121130 2843 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 13:59:30.122725 kubelet[2843]: I1216 13:59:30.121373 2843 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:59:30.122725 kubelet[2843]: I1216 13:59:30.121390 2843 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:59:30.124217 kubelet[2843]: I1216 13:59:30.124195 2843 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:59:30.127965 kubelet[2843]: E1216 13:59:30.127918 2843 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:59:30.132774 kubelet[2843]: I1216 13:59:30.130278 2843 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:30.132774 kubelet[2843]: I1216 13:59:30.132589 2843 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:30.133204 kubelet[2843]: I1216 13:59:30.133183 2843 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:30.156218 kubelet[2843]: W1216 13:59:30.156182 2843 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 16 13:59:30.161089 kubelet[2843]: W1216 13:59:30.160816 2843 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 16 13:59:30.161089 kubelet[2843]: E1216 13:59:30.160878 2843 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:30.161089 kubelet[2843]: W1216 13:59:30.160949 2843 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 16 13:59:30.190853 kubelet[2843]: I1216 13:59:30.190821 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/428ef263de6ae06547cd911342c6f899-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"428ef263de6ae06547cd911342c6f899\") " pod="kube-system/kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:30.190982 kubelet[2843]: I1216 13:59:30.190957 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3cd7489d30dd83e6090b3ed395f32089-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"3cd7489d30dd83e6090b3ed395f32089\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:30.191041 kubelet[2843]: I1216 13:59:30.190996 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3cd7489d30dd83e6090b3ed395f32089-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"3cd7489d30dd83e6090b3ed395f32089\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:30.191041 kubelet[2843]: I1216 13:59:30.191029 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8ea28b0fe41064d6fb8a5431b2ac4a8d-kubeconfig\") pod 
\"kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"8ea28b0fe41064d6fb8a5431b2ac4a8d\") " pod="kube-system/kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:30.191209 kubelet[2843]: I1216 13:59:30.191058 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cd7489d30dd83e6090b3ed395f32089-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"3cd7489d30dd83e6090b3ed395f32089\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:30.191209 kubelet[2843]: I1216 13:59:30.191087 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/428ef263de6ae06547cd911342c6f899-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"428ef263de6ae06547cd911342c6f899\") " pod="kube-system/kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:30.191209 kubelet[2843]: I1216 13:59:30.191115 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/428ef263de6ae06547cd911342c6f899-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"428ef263de6ae06547cd911342c6f899\") " pod="kube-system/kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:30.191209 kubelet[2843]: I1216 13:59:30.191146 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3cd7489d30dd83e6090b3ed395f32089-ca-certs\") pod \"kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"3cd7489d30dd83e6090b3ed395f32089\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:30.191387 kubelet[2843]: I1216 13:59:30.191174 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3cd7489d30dd83e6090b3ed395f32089-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" (UID: \"3cd7489d30dd83e6090b3ed395f32089\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:30.242207 kubelet[2843]: I1216 13:59:30.242169 2843 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:30.253775 kubelet[2843]: I1216 13:59:30.253652 2843 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:30.253903 kubelet[2843]: I1216 13:59:30.253870 2843 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:30.556606 update_engine[1573]: I20251216 13:59:30.556248 1573 update_attempter.cc:509] Updating boot flags... 
Dec 16 13:59:30.939948 kubelet[2843]: I1216 13:59:30.937716 2843 apiserver.go:52] "Watching apiserver" Dec 16 13:59:30.990379 kubelet[2843]: I1216 13:59:30.990309 2843 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 13:59:31.090995 kubelet[2843]: I1216 13:59:31.088530 2843 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:31.094953 kubelet[2843]: I1216 13:59:31.093205 2843 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:31.131731 kubelet[2843]: W1216 13:59:31.131688 2843 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 16 13:59:31.132946 kubelet[2843]: W1216 13:59:31.132785 2843 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 16 13:59:31.133493 kubelet[2843]: E1216 13:59:31.132970 2843 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:31.133493 kubelet[2843]: E1216 13:59:31.132910 2843 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 13:59:31.159218 kubelet[2843]: I1216 13:59:31.158901 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" podStartSLOduration=1.1587234020000001 podStartE2EDuration="1.158723402s" podCreationTimestamp="2025-12-16 13:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:59:31.157171545 +0000 UTC m=+1.324514845" watchObservedRunningTime="2025-12-16 13:59:31.158723402 +0000 UTC m=+1.326066690" Dec 16 13:59:31.183294 kubelet[2843]: I1216 13:59:31.182962 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" podStartSLOduration=1.182797172 podStartE2EDuration="1.182797172s" podCreationTimestamp="2025-12-16 13:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:59:31.171037101 +0000 UTC m=+1.338380389" watchObservedRunningTime="2025-12-16 13:59:31.182797172 +0000 UTC m=+1.350140483" Dec 16 13:59:31.184354 kubelet[2843]: I1216 13:59:31.184226 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" podStartSLOduration=4.184193081 podStartE2EDuration="4.184193081s" podCreationTimestamp="2025-12-16 13:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 
13:59:31.183988449 +0000 UTC m=+1.351331738" watchObservedRunningTime="2025-12-16 13:59:31.184193081 +0000 UTC m=+1.351536368" Dec 16 13:59:36.364237 kubelet[2843]: I1216 13:59:36.364182 2843 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:59:36.365708 containerd[1603]: time="2025-12-16T13:59:36.365632857Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:59:36.366768 kubelet[2843]: I1216 13:59:36.366587 2843 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:59:37.239046 kubelet[2843]: I1216 13:59:37.238998 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/670c3ed4-f8e7-4359-9d51-d741bcca7501-kube-proxy\") pod \"kube-proxy-cmcwm\" (UID: \"670c3ed4-f8e7-4359-9d51-d741bcca7501\") " pod="kube-system/kube-proxy-cmcwm" Dec 16 13:59:37.239219 kubelet[2843]: I1216 13:59:37.239050 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4pk5\" (UniqueName: \"kubernetes.io/projected/670c3ed4-f8e7-4359-9d51-d741bcca7501-kube-api-access-v4pk5\") pod \"kube-proxy-cmcwm\" (UID: \"670c3ed4-f8e7-4359-9d51-d741bcca7501\") " pod="kube-system/kube-proxy-cmcwm" Dec 16 13:59:37.239219 kubelet[2843]: I1216 13:59:37.239083 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/670c3ed4-f8e7-4359-9d51-d741bcca7501-xtables-lock\") pod \"kube-proxy-cmcwm\" (UID: \"670c3ed4-f8e7-4359-9d51-d741bcca7501\") " pod="kube-system/kube-proxy-cmcwm" Dec 16 13:59:37.239219 kubelet[2843]: I1216 13:59:37.239112 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/670c3ed4-f8e7-4359-9d51-d741bcca7501-lib-modules\") pod \"kube-proxy-cmcwm\" (UID: \"670c3ed4-f8e7-4359-9d51-d741bcca7501\") " pod="kube-system/kube-proxy-cmcwm" Dec 16 13:59:37.244182 kubelet[2843]: W1216 13:59:37.244089 2843 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal' and this object Dec 16 13:59:37.244182 kubelet[2843]: E1216 13:59:37.244142 2843 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal' and this object" logger="UnhandledError" Dec 16 13:59:37.246859 systemd[1]: Created slice kubepods-besteffort-pod670c3ed4_f8e7_4359_9d51_d741bcca7501.slice - libcontainer container kubepods-besteffort-pod670c3ed4_f8e7_4359_9d51_d741bcca7501.slice. 
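The slice name in the "Created slice" record above is derived from the pod's QoS class and UID, with the UID's dashes mapped to underscores for the systemd cgroup driver reported earlier. A one-liner reproducing it from values taken from the kube-proxy-cmcwm records in the log:

```python
# Values copied from the kube-proxy-cmcwm records above.
pod_uid = "670c3ed4-f8e7-4359-9d51-d741bcca7501"
qos_class = "besteffort"

slice_name = f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"
print(slice_name)
# kubepods-besteffort-pod670c3ed4_f8e7_4359_9d51_d741bcca7501.slice  (matches the log)
```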
Dec 16 13:59:37.481402 systemd[1]: Created slice kubepods-besteffort-poddf1aa2ef_b4fb_47f2_9a64_a4e994bd624b.slice - libcontainer container kubepods-besteffort-poddf1aa2ef_b4fb_47f2_9a64_a4e994bd624b.slice. Dec 16 13:59:37.541178 kubelet[2843]: I1216 13:59:37.541108 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/df1aa2ef-b4fb-47f2-9a64-a4e994bd624b-var-lib-calico\") pod \"tigera-operator-7dcd859c48-sfjdv\" (UID: \"df1aa2ef-b4fb-47f2-9a64-a4e994bd624b\") " pod="tigera-operator/tigera-operator-7dcd859c48-sfjdv" Dec 16 13:59:37.541178 kubelet[2843]: I1216 13:59:37.541171 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxlkz\" (UniqueName: \"kubernetes.io/projected/df1aa2ef-b4fb-47f2-9a64-a4e994bd624b-kube-api-access-gxlkz\") pod \"tigera-operator-7dcd859c48-sfjdv\" (UID: \"df1aa2ef-b4fb-47f2-9a64-a4e994bd624b\") " pod="tigera-operator/tigera-operator-7dcd859c48-sfjdv" Dec 16 13:59:37.786216 containerd[1603]: time="2025-12-16T13:59:37.786156305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sfjdv,Uid:df1aa2ef-b4fb-47f2-9a64-a4e994bd624b,Namespace:tigera-operator,Attempt:0,}" Dec 16 13:59:37.820272 containerd[1603]: time="2025-12-16T13:59:37.820085019Z" level=info msg="connecting to shim d02bebc69de30fe88dd38eeac932d60a64bd1b164c99cc89abbd5251cc32b768" address="unix:///run/containerd/s/edd61cf86713ddee66665e47ccf8953917da36cafb3ae5ff68bffc3819732818" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:59:37.857994 systemd[1]: Started cri-containerd-d02bebc69de30fe88dd38eeac932d60a64bd1b164c99cc89abbd5251cc32b768.scope - libcontainer container d02bebc69de30fe88dd38eeac932d60a64bd1b164c99cc89abbd5251cc32b768. 
Dec 16 13:59:37.876000 audit: BPF prog-id=136 op=LOAD Dec 16 13:59:37.882915 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 13:59:37.883053 kernel: audit: type=1334 audit(1765893577.876:421): prog-id=136 op=LOAD Dec 16 13:59:37.876000 audit: BPF prog-id=137 op=LOAD Dec 16 13:59:37.897110 kernel: audit: type=1334 audit(1765893577.876:422): prog-id=137 op=LOAD Dec 16 13:59:37.897199 kernel: audit: type=1300 audit(1765893577.876:422): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2926 pid=2939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:37.876000 audit[2939]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2926 pid=2939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:37.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326265626336396465333066653838646433386565616339333264 Dec 16 13:59:37.955647 kernel: audit: type=1327 audit(1765893577.876:422): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326265626336396465333066653838646433386565616339333264 Dec 16 13:59:37.956204 kernel: audit: type=1334 audit(1765893577.876:423): prog-id=137 op=UNLOAD Dec 16 13:59:37.876000 audit: BPF prog-id=137 op=UNLOAD Dec 16 13:59:37.876000 audit[2939]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=2939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:38.023176 kernel: audit: type=1300 audit(1765893577.876:423): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=2939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:38.023317 kernel: audit: type=1327 audit(1765893577.876:423): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326265626336396465333066653838646433386565616339333264 Dec 16 13:59:37.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326265626336396465333066653838646433386565616339333264 Dec 16 13:59:37.876000 audit: BPF prog-id=138 op=LOAD Dec 16 13:59:37.876000 audit[2939]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2926 pid=2939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:38.061604 
kernel: audit: type=1334 audit(1765893577.876:424): prog-id=138 op=LOAD Dec 16 13:59:38.061707 kernel: audit: type=1300 audit(1765893577.876:424): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2926 pid=2939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:38.061787 kernel: audit: type=1327 audit(1765893577.876:424): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326265626336396465333066653838646433386565616339333264 Dec 16 13:59:37.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326265626336396465333066653838646433386565616339333264 Dec 16 13:59:37.876000 audit: BPF prog-id=139 op=LOAD Dec 16 13:59:37.876000 audit[2939]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2926 pid=2939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:37.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326265626336396465333066653838646433386565616339333264 Dec 16 13:59:37.876000 audit: BPF prog-id=139 op=UNLOAD Dec 16 13:59:37.876000 audit[2939]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=2939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:37.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326265626336396465333066653838646433386565616339333264 Dec 16 13:59:37.876000 audit: BPF prog-id=138 op=UNLOAD Dec 16 13:59:37.876000 audit[2939]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=2939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:37.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326265626336396465333066653838646433386565616339333264 Dec 16 13:59:37.876000 audit: BPF prog-id=140 op=LOAD Dec 16 13:59:37.876000 audit[2939]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2926 pid=2939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:37.876000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326265626336396465333066653838646433386565616339333264 Dec 16 13:59:38.092465 containerd[1603]: time="2025-12-16T13:59:38.092331198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sfjdv,Uid:df1aa2ef-b4fb-47f2-9a64-a4e994bd624b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d02bebc69de30fe88dd38eeac932d60a64bd1b164c99cc89abbd5251cc32b768\"" Dec 16 13:59:38.096299 containerd[1603]: time="2025-12-16T13:59:38.096263142Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 13:59:38.340820 kubelet[2843]: E1216 13:59:38.340618 2843 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Dec 16 13:59:38.340820 kubelet[2843]: E1216 13:59:38.340738 2843 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/670c3ed4-f8e7-4359-9d51-d741bcca7501-kube-proxy podName:670c3ed4-f8e7-4359-9d51-d741bcca7501 nodeName:}" failed. No retries permitted until 2025-12-16 13:59:38.840708897 +0000 UTC m=+9.008052175 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/670c3ed4-f8e7-4359-9d51-d741bcca7501-kube-proxy") pod "kube-proxy-cmcwm" (UID: "670c3ed4-f8e7-4359-9d51-d741bcca7501") : failed to sync configmap cache: timed out waiting for the condition Dec 16 13:59:39.059302 containerd[1603]: time="2025-12-16T13:59:39.058538996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cmcwm,Uid:670c3ed4-f8e7-4359-9d51-d741bcca7501,Namespace:kube-system,Attempt:0,}" Dec 16 13:59:39.098438 containerd[1603]: time="2025-12-16T13:59:39.098379274Z" level=info msg="connecting to shim 644cad54f733eb660fe1743e4d0b6592d4d91ad64ec6fac5555b652d57ed610e" address="unix:///run/containerd/s/180f3ff72a988285362ebea3b17ea03052b52616a18085e48f4e3b956daa4dc9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:59:39.145023 systemd[1]: Started cri-containerd-644cad54f733eb660fe1743e4d0b6592d4d91ad64ec6fac5555b652d57ed610e.scope - libcontainer container 644cad54f733eb660fe1743e4d0b6592d4d91ad64ec6fac5555b652d57ed610e. 
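The nestedpendingoperations record above schedules the next MountVolume.SetUp attempt durationBeforeRetry after the failed one, which is where the "No retries permitted until 13:59:38.840708897" timestamp comes from. A quick check of that arithmetic, using the values copied from the record (truncated to microseconds):

```python
from datetime import datetime, timedelta

# "No retries permitted until" time and durationBeforeRetry copied from the
# nestedpendingoperations record above (truncated to microsecond precision).
retry_at = datetime.fromisoformat("2025-12-16 13:59:38.840708")
duration_before_retry = timedelta(milliseconds=500)

failed_at = retry_at - duration_before_retry
print(failed_at)  # 2025-12-16 13:59:38.340708, just ahead of the record's own 13:59:38.3408 timestamp
```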
Dec 16 13:59:39.175000 audit: BPF prog-id=141 op=LOAD Dec 16 13:59:39.175000 audit: BPF prog-id=142 op=LOAD Dec 16 13:59:39.175000 audit[2984]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2973 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634346361643534663733336562363630666531373433653464306236 Dec 16 13:59:39.176000 audit: BPF prog-id=142 op=UNLOAD Dec 16 13:59:39.176000 audit[2984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2973 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634346361643534663733336562363630666531373433653464306236 Dec 16 13:59:39.176000 audit: BPF prog-id=143 op=LOAD Dec 16 13:59:39.176000 audit[2984]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2973 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634346361643534663733336562363630666531373433653464306236 Dec 16 13:59:39.177000 audit: BPF prog-id=144 op=LOAD Dec 16 13:59:39.177000 audit[2984]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2973 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634346361643534663733336562363630666531373433653464306236 Dec 16 13:59:39.177000 audit: BPF prog-id=144 op=UNLOAD Dec 16 13:59:39.177000 audit[2984]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2973 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634346361643534663733336562363630666531373433653464306236 Dec 16 13:59:39.177000 audit: BPF prog-id=143 op=UNLOAD Dec 16 13:59:39.177000 audit[2984]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2973 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634346361643534663733336562363630666531373433653464306236 Dec 16 13:59:39.178000 audit: BPF prog-id=145 op=LOAD Dec 16 13:59:39.178000 audit[2984]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2973 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634346361643534663733336562363630666531373433653464306236 Dec 16 13:59:39.214382 containerd[1603]: time="2025-12-16T13:59:39.214321184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cmcwm,Uid:670c3ed4-f8e7-4359-9d51-d741bcca7501,Namespace:kube-system,Attempt:0,} returns sandbox id \"644cad54f733eb660fe1743e4d0b6592d4d91ad64ec6fac5555b652d57ed610e\"" Dec 16 13:59:39.219664 containerd[1603]: time="2025-12-16T13:59:39.219436024Z" level=info msg="CreateContainer within sandbox \"644cad54f733eb660fe1743e4d0b6592d4d91ad64ec6fac5555b652d57ed610e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:59:39.259236 containerd[1603]: time="2025-12-16T13:59:39.258680272Z" level=info msg="Container b4ebb3389afe40329c03ccf5554e5abbb1f036b8dd2ce5e209ef1665f620cb4f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:59:39.264342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1976700683.mount: Deactivated successfully. Dec 16 13:59:39.274640 containerd[1603]: time="2025-12-16T13:59:39.274548324Z" level=info msg="CreateContainer within sandbox \"644cad54f733eb660fe1743e4d0b6592d4d91ad64ec6fac5555b652d57ed610e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b4ebb3389afe40329c03ccf5554e5abbb1f036b8dd2ce5e209ef1665f620cb4f\"" Dec 16 13:59:39.278112 containerd[1603]: time="2025-12-16T13:59:39.276001977Z" level=info msg="StartContainer for \"b4ebb3389afe40329c03ccf5554e5abbb1f036b8dd2ce5e209ef1665f620cb4f\"" Dec 16 13:59:39.283415 containerd[1603]: time="2025-12-16T13:59:39.283361950Z" level=info msg="connecting to shim b4ebb3389afe40329c03ccf5554e5abbb1f036b8dd2ce5e209ef1665f620cb4f" address="unix:///run/containerd/s/180f3ff72a988285362ebea3b17ea03052b52616a18085e48f4e3b956daa4dc9" protocol=ttrpc version=3 Dec 16 13:59:39.329464 systemd[1]: Started cri-containerd-b4ebb3389afe40329c03ccf5554e5abbb1f036b8dd2ce5e209ef1665f620cb4f.scope - libcontainer container b4ebb3389afe40329c03ccf5554e5abbb1f036b8dd2ce5e209ef1665f620cb4f. 
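The PROCTITLE fields in the audit records above and below are hex-encoded argv strings with NUL separators; the kernel truncates long ones. A minimal decoder sketch, fed the leading portion of one of the runc PROCTITLE values above (shortened here for readability); the later NETFILTER_CFG values decode the same way into the iptables/ip6tables commands that create the KUBE-* chains:

```python
# Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
# The sample is the leading portion of a runc record above, shortened here.
sample = (
    "72756E63002D2D726F6F7400"
    "2F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
)

def decode_proctitle(hex_str: str) -> str:
    raw = bytes.fromhex(hex_str)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

print(decode_proctitle(sample))
# runc --root /run/containerd/runc/k8s.io
```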
Dec 16 13:59:39.404000 audit: BPF prog-id=146 op=LOAD Dec 16 13:59:39.404000 audit[3014]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2973 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234656262333338396166653430333239633033636366353535346535 Dec 16 13:59:39.405000 audit: BPF prog-id=147 op=LOAD Dec 16 13:59:39.405000 audit[3014]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2973 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234656262333338396166653430333239633033636366353535346535 Dec 16 13:59:39.406000 audit: BPF prog-id=147 op=UNLOAD Dec 16 13:59:39.406000 audit[3014]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2973 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234656262333338396166653430333239633033636366353535346535 Dec 16 13:59:39.406000 audit: BPF prog-id=146 op=UNLOAD Dec 16 13:59:39.406000 audit[3014]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2973 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234656262333338396166653430333239633033636366353535346535 Dec 16 13:59:39.406000 audit: BPF prog-id=148 op=LOAD Dec 16 13:59:39.406000 audit[3014]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2973 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234656262333338396166653430333239633033636366353535346535 Dec 16 13:59:39.470064 containerd[1603]: time="2025-12-16T13:59:39.469625350Z" level=info msg="StartContainer for 
\"b4ebb3389afe40329c03ccf5554e5abbb1f036b8dd2ce5e209ef1665f620cb4f\" returns successfully" Dec 16 13:59:39.685000 audit[3080]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:39.685000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2f93da60 a2=0 a3=7ffc2f93da4c items=0 ppid=3030 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.685000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 13:59:39.698000 audit[3083]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.698000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffffb6816a0 a2=0 a3=7ffffb68168c items=0 ppid=3030 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.698000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 13:59:39.702000 audit[3082]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:39.702000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa48b7e50 a2=0 a3=7fffa48b7e3c items=0 ppid=3030 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.702000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 13:59:39.707000 audit[3086]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.707000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd88de4320 a2=0 a3=7ffd88de430c items=0 ppid=3030 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 13:59:39.708000 audit[3087]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:39.708000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe154f91f0 a2=0 a3=7ffe154f91dc items=0 ppid=3030 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.708000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 13:59:39.711000 audit[3088]: NETFILTER_CFG table=filter:59 
family=2 entries=1 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.711000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9d844010 a2=0 a3=7ffd9d843ffc items=0 ppid=3030 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.711000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 13:59:39.792000 audit[3089]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.792000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdcac57700 a2=0 a3=7ffdcac576ec items=0 ppid=3030 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.792000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 13:59:39.800000 audit[3091]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.800000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc0c1b5c40 a2=0 a3=7ffc0c1b5c2c items=0 ppid=3030 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 13:59:39.808000 audit[3094]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.808000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff1fa82fb0 a2=0 a3=7fff1fa82f9c items=0 ppid=3030 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 13:59:39.810000 audit[3095]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.810000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd36d43700 a2=0 a3=7ffd36d436ec items=0 ppid=3030 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.810000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 13:59:39.816000 audit[3097]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.816000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdafe4d650 a2=0 a3=7ffdafe4d63c items=0 ppid=3030 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.816000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 13:59:39.819000 audit[3098]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.819000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcca3bc160 a2=0 a3=7ffcca3bc14c items=0 ppid=3030 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 13:59:39.825000 audit[3100]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.825000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc2d432c20 a2=0 a3=7ffc2d432c0c items=0 ppid=3030 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.825000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 13:59:39.840000 audit[3103]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.840000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffec75a15c0 a2=0 a3=7ffec75a15ac items=0 ppid=3030 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.840000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 13:59:39.843000 audit[3104]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.843000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe31f65d50 a2=0 a3=7ffe31f65d3c items=0 
ppid=3030 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.843000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 13:59:39.849000 audit[3106]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.849000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcddc71d90 a2=0 a3=7ffcddc71d7c items=0 ppid=3030 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.849000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 13:59:39.853000 audit[3107]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.853000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde2fd4470 a2=0 a3=7ffde2fd445c items=0 ppid=3030 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.853000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 13:59:39.858000 audit[3109]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.858000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffe3ce6a30 a2=0 a3=7fffe3ce6a1c items=0 ppid=3030 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.858000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 13:59:39.870000 audit[3112]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.870000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc88b7e490 a2=0 a3=7ffc88b7e47c items=0 ppid=3030 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.870000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 13:59:39.883000 audit[3115]: NETFILTER_CFG table=filter:73 
family=2 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.883000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe68f72140 a2=0 a3=7ffe68f7212c items=0 ppid=3030 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.883000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 13:59:39.885000 audit[3116]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.885000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff93b52f80 a2=0 a3=7fff93b52f6c items=0 ppid=3030 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.885000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 13:59:39.892000 audit[3118]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.892000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff72e12a20 a2=0 a3=7fff72e12a0c items=0 ppid=3030 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.892000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:59:39.903000 audit[3121]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.903000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcff285ad0 a2=0 a3=7ffcff285abc items=0 ppid=3030 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.903000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:59:39.907000 audit[3122]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.907000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8f1f8580 a2=0 a3=7ffd8f1f856c items=0 ppid=3030 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.907000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 13:59:39.914000 audit[3124]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 13:59:39.914000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc0718bab0 a2=0 a3=7ffc0718ba9c items=0 ppid=3030 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.914000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 13:59:39.969000 audit[3130]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:39.969000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffca6845310 a2=0 a3=7ffca68452fc items=0 ppid=3030 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.969000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:39.980000 audit[3130]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:39.980000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffca6845310 a2=0 a3=7ffca68452fc items=0 ppid=3030 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:39.985000 audit[3135]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3135 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:39.985000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc9efa0590 a2=0 a3=7ffc9efa057c items=0 ppid=3030 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.985000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 13:59:39.993000 audit[3137]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3137 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:39.993000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd571b5ce0 a2=0 a3=7ffd571b5ccc items=0 ppid=3030 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:39.993000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 13:59:40.009000 audit[3140]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.009000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc76a6d2c0 a2=0 a3=7ffc76a6d2ac items=0 ppid=3030 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.009000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 13:59:40.014000 audit[3141]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.014000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe63b05870 a2=0 a3=7ffe63b0585c items=0 ppid=3030 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.014000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 13:59:40.021000 audit[3143]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.021000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff4ae1de20 a2=0 a3=7fff4ae1de0c items=0 ppid=3030 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.021000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 13:59:40.025000 audit[3144]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.025000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb8b29e30 a2=0 a3=7ffdb8b29e1c items=0 ppid=3030 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.025000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 13:59:40.036000 audit[3146]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.036000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff3e517bb0 a2=0 
a3=7fff3e517b9c items=0 ppid=3030 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.036000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 13:59:40.064000 audit[3149]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3149 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.064000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffff7c91aa0 a2=0 a3=7ffff7c91a8c items=0 ppid=3030 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.064000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 13:59:40.068000 audit[3150]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.068000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc80739300 a2=0 a3=7ffc807392ec items=0 ppid=3030 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 13:59:40.087810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878505189.mount: Deactivated successfully. 
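The PROCTITLE values in the audit records above are simply the audited process's argv, hex-encoded with NUL bytes separating the arguments, so each command line can be recovered directly from the log. A minimal decoding sketch in Python (the helper name is illustrative, not part of any tooling shown in this log):

    # Sketch only: decode an audit PROCTITLE value back into the command line.
    # The kernel hex-encodes the audited process's argv and separates the
    # individual arguments with NUL bytes, so the reverse transform is
    # bytes.fromhex() followed by a split on b"\x00".
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

    # The ip6tables entry just above decodes to:
    #   ip6tables -w 5 -W 100000 -N KUBE-FORWARD -t filter
    print(decode_proctitle(
        "6970367461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D464F5257415244002D740066696C746572"))

The same transform applied to the earlier entries yields the KUBE-PROXY-CANARY, KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES and KUBE-POSTROUTING chain-creation commands issued by kube-proxy.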
Dec 16 13:59:40.087000 audit[3152]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.087000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc5ab79610 a2=0 a3=7ffc5ab795fc items=0 ppid=3030 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.087000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 13:59:40.104000 audit[3153]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.104000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfa9794d0 a2=0 a3=7ffdfa9794bc items=0 ppid=3030 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 13:59:40.117000 audit[3155]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.117000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcaf424f80 a2=0 a3=7ffcaf424f6c items=0 ppid=3030 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.117000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 13:59:40.130000 audit[3158]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.130000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcac039f00 a2=0 a3=7ffcac039eec items=0 ppid=3030 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.130000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 13:59:40.148000 audit[3161]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.148000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff9853afe0 a2=0 a3=7fff9853afcc items=0 ppid=3030 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.148000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 13:59:40.163542 kubelet[2843]: I1216 13:59:40.163472 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cmcwm" podStartSLOduration=3.163429568 podStartE2EDuration="3.163429568s" podCreationTimestamp="2025-12-16 13:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:59:40.160811172 +0000 UTC m=+10.328154464" watchObservedRunningTime="2025-12-16 13:59:40.163429568 +0000 UTC m=+10.330772855" Dec 16 13:59:40.164000 audit[3162]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.164000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffff81c2660 a2=0 a3=7ffff81c264c items=0 ppid=3030 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.164000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 13:59:40.184000 audit[3164]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.184000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd241f1db0 a2=0 a3=7ffd241f1d9c items=0 ppid=3030 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.184000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:59:40.198000 audit[3167]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.198000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd9035b1d0 a2=0 a3=7ffd9035b1bc items=0 ppid=3030 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.198000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 13:59:40.202000 audit[3168]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.202000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea3842e50 a2=0 a3=7ffea3842e3c items=0 ppid=3030 pid=3168 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.202000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 13:59:40.222000 audit[3170]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.222000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc61a7c810 a2=0 a3=7ffc61a7c7fc items=0 ppid=3030 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.222000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 13:59:40.225000 audit[3171]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.225000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe976ae960 a2=0 a3=7ffe976ae94c items=0 ppid=3030 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.225000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 13:59:40.231000 audit[3173]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.231000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff8e884920 a2=0 a3=7fff8e88490c items=0 ppid=3030 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.231000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:59:40.240000 audit[3176]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 13:59:40.240000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcf1664890 a2=0 a3=7ffcf166487c items=0 ppid=3030 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.240000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 13:59:40.247000 audit[3178]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 13:59:40.247000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffde3b8fc80 a2=0 a3=7ffde3b8fc6c items=0 ppid=3030 pid=3178 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.247000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:40.249000 audit[3178]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 13:59:40.249000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffde3b8fc80 a2=0 a3=7ffde3b8fc6c items=0 ppid=3030 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.249000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:40.691652 containerd[1603]: time="2025-12-16T13:59:40.691573655Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:40.692933 containerd[1603]: time="2025-12-16T13:59:40.692884524Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 16 13:59:40.694577 containerd[1603]: time="2025-12-16T13:59:40.694498524Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:40.697613 containerd[1603]: time="2025-12-16T13:59:40.697542401Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:40.699068 containerd[1603]: time="2025-12-16T13:59:40.698462920Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.601819094s" Dec 16 13:59:40.699068 containerd[1603]: time="2025-12-16T13:59:40.698509001Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 13:59:40.704448 containerd[1603]: time="2025-12-16T13:59:40.702854039Z" level=info msg="CreateContainer within sandbox \"d02bebc69de30fe88dd38eeac932d60a64bd1b164c99cc89abbd5251cc32b768\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 13:59:40.714331 containerd[1603]: time="2025-12-16T13:59:40.714282794Z" level=info msg="Container 2a0c1c1806d4f30585d17ded5b1b0731af034332469498b5db616b1251298f93: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:59:40.727927 containerd[1603]: time="2025-12-16T13:59:40.727876180Z" level=info msg="CreateContainer within sandbox \"d02bebc69de30fe88dd38eeac932d60a64bd1b164c99cc89abbd5251cc32b768\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2a0c1c1806d4f30585d17ded5b1b0731af034332469498b5db616b1251298f93\"" Dec 16 13:59:40.728921 containerd[1603]: time="2025-12-16T13:59:40.728873857Z" level=info 
msg="StartContainer for \"2a0c1c1806d4f30585d17ded5b1b0731af034332469498b5db616b1251298f93\"" Dec 16 13:59:40.730417 containerd[1603]: time="2025-12-16T13:59:40.730379109Z" level=info msg="connecting to shim 2a0c1c1806d4f30585d17ded5b1b0731af034332469498b5db616b1251298f93" address="unix:///run/containerd/s/edd61cf86713ddee66665e47ccf8953917da36cafb3ae5ff68bffc3819732818" protocol=ttrpc version=3 Dec 16 13:59:40.763051 systemd[1]: Started cri-containerd-2a0c1c1806d4f30585d17ded5b1b0731af034332469498b5db616b1251298f93.scope - libcontainer container 2a0c1c1806d4f30585d17ded5b1b0731af034332469498b5db616b1251298f93. Dec 16 13:59:40.781000 audit: BPF prog-id=149 op=LOAD Dec 16 13:59:40.782000 audit: BPF prog-id=150 op=LOAD Dec 16 13:59:40.782000 audit[3179]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2926 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.782000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261306331633138303664346633303538356431376465643562316230 Dec 16 13:59:40.783000 audit: BPF prog-id=150 op=UNLOAD Dec 16 13:59:40.783000 audit[3179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261306331633138303664346633303538356431376465643562316230 Dec 16 13:59:40.783000 audit: BPF prog-id=151 op=LOAD Dec 16 13:59:40.783000 audit[3179]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2926 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261306331633138303664346633303538356431376465643562316230 Dec 16 13:59:40.783000 audit: BPF prog-id=152 op=LOAD Dec 16 13:59:40.783000 audit[3179]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2926 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261306331633138303664346633303538356431376465643562316230 Dec 16 13:59:40.783000 audit: BPF prog-id=152 op=UNLOAD Dec 16 13:59:40.783000 audit[3179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 
a2=0 a3=0 items=0 ppid=2926 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261306331633138303664346633303538356431376465643562316230 Dec 16 13:59:40.783000 audit: BPF prog-id=151 op=UNLOAD Dec 16 13:59:40.783000 audit[3179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2926 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261306331633138303664346633303538356431376465643562316230 Dec 16 13:59:40.783000 audit: BPF prog-id=153 op=LOAD Dec 16 13:59:40.783000 audit[3179]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2926 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:40.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261306331633138303664346633303538356431376465643562316230 Dec 16 13:59:40.816807 containerd[1603]: time="2025-12-16T13:59:40.816598018Z" level=info msg="StartContainer for \"2a0c1c1806d4f30585d17ded5b1b0731af034332469498b5db616b1251298f93\" returns successfully" Dec 16 13:59:41.156884 kubelet[2843]: I1216 13:59:41.156503 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-sfjdv" podStartSLOduration=1.5508533770000001 podStartE2EDuration="4.156481328s" podCreationTimestamp="2025-12-16 13:59:37 +0000 UTC" firstStartedPulling="2025-12-16 13:59:38.094473878 +0000 UTC m=+8.261817153" lastFinishedPulling="2025-12-16 13:59:40.700101823 +0000 UTC m=+10.867445104" observedRunningTime="2025-12-16 13:59:41.156290653 +0000 UTC m=+11.323633958" watchObservedRunningTime="2025-12-16 13:59:41.156481328 +0000 UTC m=+11.323824622" Dec 16 13:59:48.314389 sudo[1921]: pam_unix(sudo:session): session closed for user root Dec 16 13:59:48.313000 audit[1921]: USER_END pid=1921 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:59:48.320006 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 13:59:48.320134 kernel: audit: type=1106 audit(1765893588.313:501): pid=1921 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 13:59:48.343000 audit[1921]: CRED_DISP pid=1921 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:59:48.369171 kernel: audit: type=1104 audit(1765893588.343:502): pid=1921 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 13:59:48.412667 sshd[1920]: Connection closed by 139.178.68.195 port 52094 Dec 16 13:59:48.409569 sshd-session[1916]: pam_unix(sshd:session): session closed for user core Dec 16 13:59:48.414000 audit[1916]: USER_END pid=1916 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:59:48.420443 systemd[1]: sshd@8-10.128.0.79:22-139.178.68.195:52094.service: Deactivated successfully. Dec 16 13:59:48.426250 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:59:48.427246 systemd[1]: session-10.scope: Consumed 6.737s CPU time, 232.1M memory peak. Dec 16 13:59:48.430337 systemd-logind[1569]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:59:48.437000 systemd-logind[1569]: Removed session 10. Dec 16 13:59:48.414000 audit[1916]: CRED_DISP pid=1916 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:59:48.480007 kernel: audit: type=1106 audit(1765893588.414:503): pid=1916 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:59:48.480107 kernel: audit: type=1104 audit(1765893588.414:504): pid=1916 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 13:59:48.480164 kernel: audit: type=1131 audit(1765893588.420:505): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.79:22-139.178.68.195:52094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 13:59:48.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.79:22-139.178.68.195:52094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 13:59:49.993000 audit[3259]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:50.014885 kernel: audit: type=1325 audit(1765893589.993:506): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:49.993000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd0a8ec180 a2=0 a3=7ffd0a8ec16c items=0 ppid=3030 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:50.065208 kernel: audit: type=1300 audit(1765893589.993:506): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd0a8ec180 a2=0 a3=7ffd0a8ec16c items=0 ppid=3030 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:50.084096 kernel: audit: type=1327 audit(1765893589.993:506): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:49.993000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:50.016000 audit[3259]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:50.100790 kernel: audit: type=1325 audit(1765893590.016:507): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:50.016000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd0a8ec180 a2=0 a3=0 items=0 ppid=3030 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:50.016000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:50.133780 kernel: audit: type=1300 audit(1765893590.016:507): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd0a8ec180 a2=0 a3=0 items=0 ppid=3030 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:50.217000 audit[3261]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:50.217000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcb3056c90 a2=0 a3=7ffcb3056c7c items=0 ppid=3030 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:50.217000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:50.247000 audit[3261]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3261 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:50.247000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcb3056c90 a2=0 a3=0 items=0 ppid=3030 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:50.247000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:53.563582 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 13:59:53.565577 kernel: audit: type=1325 audit(1765893593.540:510): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3267 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:53.540000 audit[3267]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3267 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:53.540000 audit[3267]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe4fb62540 a2=0 a3=7ffe4fb6252c items=0 ppid=3030 pid=3267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:53.606769 kernel: audit: type=1300 audit(1765893593.540:510): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe4fb62540 a2=0 a3=7ffe4fb6252c items=0 ppid=3030 pid=3267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:53.540000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:53.626864 kernel: audit: type=1327 audit(1765893593.540:510): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:53.632000 audit[3267]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3267 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:53.649801 kernel: audit: type=1325 audit(1765893593.632:511): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3267 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:53.632000 audit[3267]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe4fb62540 a2=0 a3=0 items=0 ppid=3030 pid=3267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:53.683815 kernel: audit: type=1300 audit(1765893593.632:511): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe4fb62540 a2=0 a3=0 items=0 ppid=3030 pid=3267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:53.632000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:53.700770 kernel: audit: type=1327 audit(1765893593.632:511): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
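Each NETFILTER_CFG record above carries a table=, family= and entries= field, so the number of nft objects kube-proxy registers per table and address family can be tallied straight from an exported copy of the journal. A rough parsing sketch, assuming the journal has been dumped to a plain-text file (the file name and helper are hypothetical, not commands that appear in this log):

    # Sketch only: count registered nft objects per (table, family) from
    # NETFILTER_CFG audit records. family=2 is AF_INET, family=10 is AF_INET6.
    import re
    from collections import Counter

    pattern = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+)")

    def tally(log_text: str) -> Counter:
        counts = Counter()
        for table, family, entries in pattern.findall(log_text):
            label = {"2": "IPv4", "10": "IPv6"}.get(family, family)
            counts[(table, label)] += int(entries)
        return counts

    # Hypothetical usage, assuming the journal was exported to journal.txt:
    # print(tally(open("journal.txt").read()))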
Dec 16 13:59:53.713000 audit[3269]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3269 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:53.764728 kernel: audit: type=1325 audit(1765893593.713:512): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3269 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:53.764965 kernel: audit: type=1300 audit(1765893593.713:512): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd06f4fcb0 a2=0 a3=7ffd06f4fc9c items=0 ppid=3030 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:53.713000 audit[3269]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd06f4fcb0 a2=0 a3=7ffd06f4fc9c items=0 ppid=3030 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:53.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:53.781796 kernel: audit: type=1327 audit(1765893593.713:512): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:53.785000 audit[3269]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3269 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:53.785000 audit[3269]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd06f4fcb0 a2=0 a3=0 items=0 ppid=3030 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:53.785000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:53.802813 kernel: audit: type=1325 audit(1765893593.785:513): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3269 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:54.815000 audit[3271]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3271 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:54.815000 audit[3271]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffea6c41350 a2=0 a3=7ffea6c4133c items=0 ppid=3030 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:54.815000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:54.821000 audit[3271]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3271 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:54.821000 audit[3271]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea6c41350 a2=0 a3=0 items=0 ppid=3030 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 13:59:54.821000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:56.212000 audit[3273]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3273 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:56.212000 audit[3273]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe7a69b630 a2=0 a3=7ffe7a69b61c items=0 ppid=3030 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:56.212000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:56.219000 audit[3273]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3273 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:56.219000 audit[3273]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe7a69b630 a2=0 a3=0 items=0 ppid=3030 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:56.219000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:56.252953 systemd[1]: Created slice kubepods-besteffort-pod97086076_4ef6_42a5_a50e_66451105c1cc.slice - libcontainer container kubepods-besteffort-pod97086076_4ef6_42a5_a50e_66451105c1cc.slice. Dec 16 13:59:56.429636 systemd[1]: Created slice kubepods-besteffort-pod129c5b98_2fc5_42a9_b073_2d0f446af2df.slice - libcontainer container kubepods-besteffort-pod129c5b98_2fc5_42a9_b073_2d0f446af2df.slice. 
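The kubepods-besteffort-pod…slice unit names created above are derived from the pod UIDs that appear in the kubelet volume records below: with the systemd cgroup driver, the dashes in the UID are replaced by underscores in the slice name. A small sketch reproducing the mapping, using the calico-typha pod UID from this log (the helper name is illustrative):

    # Sketch only, assuming the systemd cgroup driver: derive the besteffort
    # pod slice name from a pod UID by swapping dashes for underscores.
    def besteffort_slice(pod_uid: str) -> str:
        return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

    print(besteffort_slice("97086076-4ef6-42a5-a50e-66451105c1cc"))
    # -> kubepods-besteffort-pod97086076_4ef6_42a5_a50e_66451105c1cc.slice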
Dec 16 13:59:56.431871 kubelet[2843]: I1216 13:59:56.430949 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/97086076-4ef6-42a5-a50e-66451105c1cc-typha-certs\") pod \"calico-typha-796cc8989b-jnp8z\" (UID: \"97086076-4ef6-42a5-a50e-66451105c1cc\") " pod="calico-system/calico-typha-796cc8989b-jnp8z" Dec 16 13:59:56.431871 kubelet[2843]: I1216 13:59:56.431160 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zq4r\" (UniqueName: \"kubernetes.io/projected/97086076-4ef6-42a5-a50e-66451105c1cc-kube-api-access-4zq4r\") pod \"calico-typha-796cc8989b-jnp8z\" (UID: \"97086076-4ef6-42a5-a50e-66451105c1cc\") " pod="calico-system/calico-typha-796cc8989b-jnp8z" Dec 16 13:59:56.431871 kubelet[2843]: I1216 13:59:56.431320 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97086076-4ef6-42a5-a50e-66451105c1cc-tigera-ca-bundle\") pod \"calico-typha-796cc8989b-jnp8z\" (UID: \"97086076-4ef6-42a5-a50e-66451105c1cc\") " pod="calico-system/calico-typha-796cc8989b-jnp8z" Dec 16 13:59:56.531888 kubelet[2843]: I1216 13:59:56.531831 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/129c5b98-2fc5-42a9-b073-2d0f446af2df-var-run-calico\") pod \"calico-node-7829c\" (UID: \"129c5b98-2fc5-42a9-b073-2d0f446af2df\") " pod="calico-system/calico-node-7829c" Dec 16 13:59:56.531888 kubelet[2843]: I1216 13:59:56.531892 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/129c5b98-2fc5-42a9-b073-2d0f446af2df-policysync\") pod \"calico-node-7829c\" (UID: \"129c5b98-2fc5-42a9-b073-2d0f446af2df\") " pod="calico-system/calico-node-7829c" Dec 16 13:59:56.532166 kubelet[2843]: I1216 13:59:56.531923 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/129c5b98-2fc5-42a9-b073-2d0f446af2df-var-lib-calico\") pod \"calico-node-7829c\" (UID: \"129c5b98-2fc5-42a9-b073-2d0f446af2df\") " pod="calico-system/calico-node-7829c" Dec 16 13:59:56.532166 kubelet[2843]: I1216 13:59:56.531953 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/129c5b98-2fc5-42a9-b073-2d0f446af2df-xtables-lock\") pod \"calico-node-7829c\" (UID: \"129c5b98-2fc5-42a9-b073-2d0f446af2df\") " pod="calico-system/calico-node-7829c" Dec 16 13:59:56.532166 kubelet[2843]: I1216 13:59:56.531980 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlhhp\" (UniqueName: \"kubernetes.io/projected/129c5b98-2fc5-42a9-b073-2d0f446af2df-kube-api-access-xlhhp\") pod \"calico-node-7829c\" (UID: \"129c5b98-2fc5-42a9-b073-2d0f446af2df\") " pod="calico-system/calico-node-7829c" Dec 16 13:59:56.532166 kubelet[2843]: I1216 13:59:56.532007 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/129c5b98-2fc5-42a9-b073-2d0f446af2df-cni-net-dir\") pod \"calico-node-7829c\" (UID: \"129c5b98-2fc5-42a9-b073-2d0f446af2df\") " 
pod="calico-system/calico-node-7829c" Dec 16 13:59:56.532166 kubelet[2843]: I1216 13:59:56.532031 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/129c5b98-2fc5-42a9-b073-2d0f446af2df-flexvol-driver-host\") pod \"calico-node-7829c\" (UID: \"129c5b98-2fc5-42a9-b073-2d0f446af2df\") " pod="calico-system/calico-node-7829c" Dec 16 13:59:56.532406 kubelet[2843]: I1216 13:59:56.532054 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/129c5b98-2fc5-42a9-b073-2d0f446af2df-tigera-ca-bundle\") pod \"calico-node-7829c\" (UID: \"129c5b98-2fc5-42a9-b073-2d0f446af2df\") " pod="calico-system/calico-node-7829c" Dec 16 13:59:56.532406 kubelet[2843]: I1216 13:59:56.532082 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/129c5b98-2fc5-42a9-b073-2d0f446af2df-cni-log-dir\") pod \"calico-node-7829c\" (UID: \"129c5b98-2fc5-42a9-b073-2d0f446af2df\") " pod="calico-system/calico-node-7829c" Dec 16 13:59:56.532406 kubelet[2843]: I1216 13:59:56.532111 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/129c5b98-2fc5-42a9-b073-2d0f446af2df-node-certs\") pod \"calico-node-7829c\" (UID: \"129c5b98-2fc5-42a9-b073-2d0f446af2df\") " pod="calico-system/calico-node-7829c" Dec 16 13:59:56.532406 kubelet[2843]: I1216 13:59:56.532140 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/129c5b98-2fc5-42a9-b073-2d0f446af2df-cni-bin-dir\") pod \"calico-node-7829c\" (UID: \"129c5b98-2fc5-42a9-b073-2d0f446af2df\") " pod="calico-system/calico-node-7829c" Dec 16 13:59:56.532406 kubelet[2843]: I1216 13:59:56.532182 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/129c5b98-2fc5-42a9-b073-2d0f446af2df-lib-modules\") pod \"calico-node-7829c\" (UID: \"129c5b98-2fc5-42a9-b073-2d0f446af2df\") " pod="calico-system/calico-node-7829c" Dec 16 13:59:56.559038 containerd[1603]: time="2025-12-16T13:59:56.558293511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-796cc8989b-jnp8z,Uid:97086076-4ef6-42a5-a50e-66451105c1cc,Namespace:calico-system,Attempt:0,}" Dec 16 13:59:56.596520 containerd[1603]: time="2025-12-16T13:59:56.596229110Z" level=info msg="connecting to shim 4c989b01f2bc5f22cef7468f0ed411ac80258b8214949b05bc49e1e9394fe7f6" address="unix:///run/containerd/s/aa54ca9727c2b4ae51352b8fe0ef2e5858cda567bed9e6aab1f50804a9b3d728" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:59:56.646927 kubelet[2843]: E1216 13:59:56.646887 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.647767 kubelet[2843]: W1216 13:59:56.647661 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.648056 kubelet[2843]: E1216 13:59:56.647932 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:59:56.648504 kubelet[2843]: E1216 13:59:56.648478 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.648652 kubelet[2843]: W1216 13:59:56.648617 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.648871 kubelet[2843]: E1216 13:59:56.648814 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.649399 kubelet[2843]: E1216 13:59:56.649375 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.649511 kubelet[2843]: W1216 13:59:56.649494 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.649615 kubelet[2843]: E1216 13:59:56.649597 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.650121 kubelet[2843]: E1216 13:59:56.650098 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.650242 kubelet[2843]: W1216 13:59:56.650222 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.650407 kubelet[2843]: E1216 13:59:56.650383 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.679026 kubelet[2843]: E1216 13:59:56.674829 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.679026 kubelet[2843]: W1216 13:59:56.674857 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.679026 kubelet[2843]: E1216 13:59:56.674881 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.681805 kubelet[2843]: E1216 13:59:56.679661 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c" Dec 16 13:59:56.708029 systemd[1]: Started cri-containerd-4c989b01f2bc5f22cef7468f0ed411ac80258b8214949b05bc49e1e9394fe7f6.scope - libcontainer container 4c989b01f2bc5f22cef7468f0ed411ac80258b8214949b05bc49e1e9394fe7f6. 
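The repeated driver-call.go and plugins.go errors above are the kubelet's FlexVolume dynamic-probe loop: on each volume reconcile it rescans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds plugin directory, and tries to execute its uds driver with the init argument. The binary is not on the node yet (that directory is the flexvol-driver-host hostPath registered for calico-node above, and is normally populated by calico-node's flexvol-driver init container), so the exec fails and the empty output cannot be decoded as JSON, which is exactly the "unexpected end of JSON input" line. A minimal Go sketch of that probe sequence, using a hypothetical DriverStatus shape rather than the kubelet's real type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a hypothetical, minimal stand-in for the JSON status a FlexVolume
// driver is expected to print in response to "init"; it is not the kubelet's real type.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// probe mimics the two steps visible in the log: run "<driver> init", then JSON-decode
// whatever came back. With the uds binary absent, the exec fails and the empty output
// makes json.Unmarshal return "unexpected end of JSON input", the same pair of
// conditions the kubelet reports via driver-call.go above.
func probe(driver string) {
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		fmt.Printf("driver call failed: executable: %s, args: [init], error: %v, output: %q\n",
			driver, err, string(out))
	}
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Printf("failed to unmarshal output for command: init, output: %q, error: %v\n",
			string(out), err)
		return
	}
	fmt.Printf("driver initialised: %+v\n", st)
}

func main() {
	// Path taken from the log lines above.
	probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
}

The warnings are noisy but generally harmless: the kubelet retries the probe on every reconcile, which is why the same three messages recur throughout this window, and they stop once the driver binary is installed or the plugin directory is removed.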
Dec 16 13:59:56.711158 kubelet[2843]: E1216 13:59:56.711113 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.711158 kubelet[2843]: W1216 13:59:56.711138 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.711158 kubelet[2843]: E1216 13:59:56.711161 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.738060 kubelet[2843]: E1216 13:59:56.738006 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.738060 kubelet[2843]: W1216 13:59:56.738031 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.738405 kubelet[2843]: E1216 13:59:56.738286 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.739332 kubelet[2843]: E1216 13:59:56.738805 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.739332 kubelet[2843]: W1216 13:59:56.738825 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.739567 kubelet[2843]: E1216 13:59:56.739506 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.740003 kubelet[2843]: E1216 13:59:56.739970 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.740003 kubelet[2843]: W1216 13:59:56.739992 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.740210 kubelet[2843]: E1216 13:59:56.740010 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.740381 kubelet[2843]: E1216 13:59:56.740361 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.740462 kubelet[2843]: W1216 13:59:56.740382 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.740462 kubelet[2843]: E1216 13:59:56.740400 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:59:56.742177 kubelet[2843]: E1216 13:59:56.742141 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.742177 kubelet[2843]: W1216 13:59:56.742163 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.742429 kubelet[2843]: E1216 13:59:56.742183 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.742699 kubelet[2843]: E1216 13:59:56.742658 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.742699 kubelet[2843]: W1216 13:59:56.742679 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.742699 kubelet[2843]: E1216 13:59:56.742697 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.743997 kubelet[2843]: E1216 13:59:56.743968 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.743997 kubelet[2843]: W1216 13:59:56.743992 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.744156 kubelet[2843]: E1216 13:59:56.744010 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.744421 kubelet[2843]: E1216 13:59:56.744291 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.744421 kubelet[2843]: W1216 13:59:56.744309 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.744421 kubelet[2843]: E1216 13:59:56.744325 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.745864 kubelet[2843]: E1216 13:59:56.745835 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.745864 kubelet[2843]: W1216 13:59:56.745857 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.746010 kubelet[2843]: E1216 13:59:56.745875 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:59:56.746010 kubelet[2843]: I1216 13:59:56.745908 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fff9659c-3470-45ea-9613-14efa791d03c-registration-dir\") pod \"csi-node-driver-7s56z\" (UID: \"fff9659c-3470-45ea-9613-14efa791d03c\") " pod="calico-system/csi-node-driver-7s56z" Dec 16 13:59:56.746530 containerd[1603]: time="2025-12-16T13:59:56.746486269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7829c,Uid:129c5b98-2fc5-42a9-b073-2d0f446af2df,Namespace:calico-system,Attempt:0,}" Dec 16 13:59:56.747087 kubelet[2843]: E1216 13:59:56.747049 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.747087 kubelet[2843]: W1216 13:59:56.747086 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.747486 kubelet[2843]: E1216 13:59:56.747109 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.747486 kubelet[2843]: I1216 13:59:56.747143 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fff9659c-3470-45ea-9613-14efa791d03c-kubelet-dir\") pod \"csi-node-driver-7s56z\" (UID: \"fff9659c-3470-45ea-9613-14efa791d03c\") " pod="calico-system/csi-node-driver-7s56z" Dec 16 13:59:56.748002 kubelet[2843]: E1216 13:59:56.747951 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.748002 kubelet[2843]: W1216 13:59:56.747976 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.748002 kubelet[2843]: E1216 13:59:56.748001 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.748937 kubelet[2843]: E1216 13:59:56.748918 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.748937 kubelet[2843]: W1216 13:59:56.748936 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.749257 kubelet[2843]: E1216 13:59:56.749180 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:59:56.751034 kubelet[2843]: E1216 13:59:56.751004 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.751034 kubelet[2843]: W1216 13:59:56.751030 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.751164 kubelet[2843]: E1216 13:59:56.751132 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.753288 kubelet[2843]: E1216 13:59:56.752024 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.753288 kubelet[2843]: W1216 13:59:56.752044 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.753288 kubelet[2843]: E1216 13:59:56.752149 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.753288 kubelet[2843]: E1216 13:59:56.752382 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.753288 kubelet[2843]: W1216 13:59:56.752394 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.753288 kubelet[2843]: E1216 13:59:56.752484 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.753288 kubelet[2843]: E1216 13:59:56.752975 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.753288 kubelet[2843]: W1216 13:59:56.752990 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.753288 kubelet[2843]: E1216 13:59:56.753026 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.754196 kubelet[2843]: E1216 13:59:56.753872 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.754196 kubelet[2843]: W1216 13:59:56.753890 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.754196 kubelet[2843]: E1216 13:59:56.753936 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:59:56.754527 kubelet[2843]: E1216 13:59:56.754260 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.754527 kubelet[2843]: W1216 13:59:56.754273 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.754527 kubelet[2843]: E1216 13:59:56.754289 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.755322 kubelet[2843]: E1216 13:59:56.755289 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.755322 kubelet[2843]: W1216 13:59:56.755310 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.755322 kubelet[2843]: E1216 13:59:56.755327 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.755868 kubelet[2843]: E1216 13:59:56.755836 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.755868 kubelet[2843]: W1216 13:59:56.755860 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.756011 kubelet[2843]: E1216 13:59:56.755877 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.756657 kubelet[2843]: E1216 13:59:56.756633 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.756657 kubelet[2843]: W1216 13:59:56.756656 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.757154 kubelet[2843]: E1216 13:59:56.756672 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.757154 kubelet[2843]: E1216 13:59:56.756994 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.757154 kubelet[2843]: W1216 13:59:56.757008 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.757154 kubelet[2843]: E1216 13:59:56.757024 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:59:56.758022 kubelet[2843]: E1216 13:59:56.758001 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.758022 kubelet[2843]: W1216 13:59:56.758020 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.758538 kubelet[2843]: E1216 13:59:56.758039 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.758538 kubelet[2843]: E1216 13:59:56.758345 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.758538 kubelet[2843]: W1216 13:59:56.758359 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.758538 kubelet[2843]: E1216 13:59:56.758375 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.759317 kubelet[2843]: E1216 13:59:56.758664 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.759317 kubelet[2843]: W1216 13:59:56.758681 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.759317 kubelet[2843]: E1216 13:59:56.758698 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.759476 kubelet[2843]: E1216 13:59:56.759324 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.759476 kubelet[2843]: W1216 13:59:56.759340 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.759476 kubelet[2843]: E1216 13:59:56.759356 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:59:56.765000 audit: BPF prog-id=154 op=LOAD Dec 16 13:59:56.767000 audit: BPF prog-id=155 op=LOAD Dec 16 13:59:56.767000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3283 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:56.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463393839623031663262633566323263656637343638663065643431 Dec 16 13:59:56.767000 audit: BPF prog-id=155 op=UNLOAD Dec 16 13:59:56.767000 audit[3294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3283 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:56.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463393839623031663262633566323263656637343638663065643431 Dec 16 13:59:56.767000 audit: BPF prog-id=156 op=LOAD Dec 16 13:59:56.767000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3283 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:56.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463393839623031663262633566323263656637343638663065643431 Dec 16 13:59:56.767000 audit: BPF prog-id=157 op=LOAD Dec 16 13:59:56.767000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3283 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:56.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463393839623031663262633566323263656637343638663065643431 Dec 16 13:59:56.767000 audit: BPF prog-id=157 op=UNLOAD Dec 16 13:59:56.767000 audit[3294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3283 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:56.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463393839623031663262633566323263656637343638663065643431 Dec 16 13:59:56.767000 audit: BPF prog-id=156 op=UNLOAD Dec 16 
13:59:56.767000 audit[3294]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3283 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:56.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463393839623031663262633566323263656637343638663065643431 Dec 16 13:59:56.768000 audit: BPF prog-id=158 op=LOAD Dec 16 13:59:56.768000 audit[3294]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3283 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:56.768000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463393839623031663262633566323263656637343638663065643431 Dec 16 13:59:56.808901 containerd[1603]: time="2025-12-16T13:59:56.808224395Z" level=info msg="connecting to shim 1dd54c62b10342a50be1c2567cdc1245ff316f51738f12c7858121e0f6b19873" address="unix:///run/containerd/s/f4ab1d4a005c13eab911f158779271276e9d7e2db83b113b349946c7a18b5ed3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:59:56.849252 kubelet[2843]: E1216 13:59:56.849189 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.849900 kubelet[2843]: W1216 13:59:56.849418 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.849900 kubelet[2843]: E1216 13:59:56.849453 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.850916 kubelet[2843]: E1216 13:59:56.850869 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.850916 kubelet[2843]: W1216 13:59:56.850891 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.851259 kubelet[2843]: E1216 13:59:56.851053 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:59:56.851956 kubelet[2843]: E1216 13:59:56.851891 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.851956 kubelet[2843]: W1216 13:59:56.851911 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.852266 kubelet[2843]: E1216 13:59:56.852060 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.853228 kubelet[2843]: E1216 13:59:56.853203 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.853428 kubelet[2843]: W1216 13:59:56.853230 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.853428 kubelet[2843]: E1216 13:59:56.853319 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.854591 kubelet[2843]: E1216 13:59:56.854564 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.854699 kubelet[2843]: W1216 13:59:56.854589 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.855335 kubelet[2843]: E1216 13:59:56.854788 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.855335 kubelet[2843]: I1216 13:59:56.855206 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd72z\" (UniqueName: \"kubernetes.io/projected/fff9659c-3470-45ea-9613-14efa791d03c-kube-api-access-jd72z\") pod \"csi-node-driver-7s56z\" (UID: \"fff9659c-3470-45ea-9613-14efa791d03c\") " pod="calico-system/csi-node-driver-7s56z" Dec 16 13:59:56.856579 kubelet[2843]: E1216 13:59:56.856166 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.856579 kubelet[2843]: W1216 13:59:56.856188 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.856579 kubelet[2843]: E1216 13:59:56.856242 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:59:56.856840 kubelet[2843]: E1216 13:59:56.856818 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.856840 kubelet[2843]: W1216 13:59:56.856835 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.856937 kubelet[2843]: E1216 13:59:56.856900 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.857118 kubelet[2843]: I1216 13:59:56.856934 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fff9659c-3470-45ea-9613-14efa791d03c-varrun\") pod \"csi-node-driver-7s56z\" (UID: \"fff9659c-3470-45ea-9613-14efa791d03c\") " pod="calico-system/csi-node-driver-7s56z" Dec 16 13:59:56.857670 kubelet[2843]: E1216 13:59:56.857505 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.857670 kubelet[2843]: W1216 13:59:56.857521 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.857670 kubelet[2843]: E1216 13:59:56.857545 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.858085 kubelet[2843]: E1216 13:59:56.858050 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.858085 kubelet[2843]: W1216 13:59:56.858067 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.858202 kubelet[2843]: E1216 13:59:56.858100 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.858691 kubelet[2843]: E1216 13:59:56.858559 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.858691 kubelet[2843]: W1216 13:59:56.858578 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.858691 kubelet[2843]: E1216 13:59:56.858639 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:59:56.860245 kubelet[2843]: I1216 13:59:56.858671 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fff9659c-3470-45ea-9613-14efa791d03c-socket-dir\") pod \"csi-node-driver-7s56z\" (UID: \"fff9659c-3470-45ea-9613-14efa791d03c\") " pod="calico-system/csi-node-driver-7s56z" Dec 16 13:59:56.860245 kubelet[2843]: E1216 13:59:56.859212 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.860245 kubelet[2843]: W1216 13:59:56.859230 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.860245 kubelet[2843]: E1216 13:59:56.859270 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.860245 kubelet[2843]: E1216 13:59:56.859817 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.860245 kubelet[2843]: W1216 13:59:56.859833 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.860245 kubelet[2843]: E1216 13:59:56.859998 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.861515 kubelet[2843]: E1216 13:59:56.860390 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.861515 kubelet[2843]: W1216 13:59:56.860410 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.861515 kubelet[2843]: E1216 13:59:56.860432 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.861515 kubelet[2843]: E1216 13:59:56.861008 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.861515 kubelet[2843]: W1216 13:59:56.861024 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.861515 kubelet[2843]: E1216 13:59:56.861176 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:59:56.861876 kubelet[2843]: E1216 13:59:56.861684 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.861876 kubelet[2843]: W1216 13:59:56.861700 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.861876 kubelet[2843]: E1216 13:59:56.861779 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.862341 kubelet[2843]: E1216 13:59:56.862292 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.862341 kubelet[2843]: W1216 13:59:56.862319 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.862582 kubelet[2843]: E1216 13:59:56.862465 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.863572 kubelet[2843]: E1216 13:59:56.863193 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.863572 kubelet[2843]: W1216 13:59:56.863326 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.863572 kubelet[2843]: E1216 13:59:56.863351 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.864118 kubelet[2843]: E1216 13:59:56.863983 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.864325 kubelet[2843]: W1216 13:59:56.864223 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.864325 kubelet[2843]: E1216 13:59:56.864265 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.865178 kubelet[2843]: E1216 13:59:56.865158 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.865499 kubelet[2843]: W1216 13:59:56.865293 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.865499 kubelet[2843]: E1216 13:59:56.865320 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:59:56.887999 systemd[1]: Started cri-containerd-1dd54c62b10342a50be1c2567cdc1245ff316f51738f12c7858121e0f6b19873.scope - libcontainer container 1dd54c62b10342a50be1c2567cdc1245ff316f51738f12c7858121e0f6b19873. Dec 16 13:59:56.923000 audit: BPF prog-id=159 op=LOAD Dec 16 13:59:56.925000 audit: BPF prog-id=160 op=LOAD Dec 16 13:59:56.925000 audit[3379]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3367 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:56.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643534633632623130333432613530626531633235363763646331 Dec 16 13:59:56.925000 audit: BPF prog-id=160 op=UNLOAD Dec 16 13:59:56.925000 audit[3379]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:56.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643534633632623130333432613530626531633235363763646331 Dec 16 13:59:56.925000 audit: BPF prog-id=161 op=LOAD Dec 16 13:59:56.925000 audit[3379]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3367 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:56.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643534633632623130333432613530626531633235363763646331 Dec 16 13:59:56.925000 audit: BPF prog-id=162 op=LOAD Dec 16 13:59:56.925000 audit[3379]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3367 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:56.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643534633632623130333432613530626531633235363763646331 Dec 16 13:59:56.925000 audit: BPF prog-id=162 op=UNLOAD Dec 16 13:59:56.925000 audit[3379]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:56.925000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643534633632623130333432613530626531633235363763646331 Dec 16 13:59:56.925000 audit: BPF prog-id=161 op=UNLOAD Dec 16 13:59:56.925000 audit[3379]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:56.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643534633632623130333432613530626531633235363763646331 Dec 16 13:59:56.925000 audit: BPF prog-id=163 op=LOAD Dec 16 13:59:56.925000 audit[3379]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3367 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:56.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643534633632623130333432613530626531633235363763646331 Dec 16 13:59:56.932306 containerd[1603]: time="2025-12-16T13:59:56.932260827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-796cc8989b-jnp8z,Uid:97086076-4ef6-42a5-a50e-66451105c1cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c989b01f2bc5f22cef7468f0ed411ac80258b8214949b05bc49e1e9394fe7f6\"" Dec 16 13:59:56.935387 containerd[1603]: time="2025-12-16T13:59:56.934923149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 13:59:56.957308 containerd[1603]: time="2025-12-16T13:59:56.957272002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7829c,Uid:129c5b98-2fc5-42a9-b073-2d0f446af2df,Namespace:calico-system,Attempt:0,} returns sandbox id \"1dd54c62b10342a50be1c2567cdc1245ff316f51738f12c7858121e0f6b19873\"" Dec 16 13:59:56.961283 kubelet[2843]: E1216 13:59:56.961259 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.961534 kubelet[2843]: W1216 13:59:56.961445 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.961773 kubelet[2843]: E1216 13:59:56.961664 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:59:56.962208 kubelet[2843]: E1216 13:59:56.962188 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.962514 kubelet[2843]: W1216 13:59:56.962448 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.963200 kubelet[2843]: E1216 13:59:56.962815 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.963569 kubelet[2843]: E1216 13:59:56.963534 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.963758 kubelet[2843]: W1216 13:59:56.963667 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.963758 kubelet[2843]: E1216 13:59:56.963695 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.964306 kubelet[2843]: E1216 13:59:56.964263 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.964306 kubelet[2843]: W1216 13:59:56.964282 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.964623 kubelet[2843]: E1216 13:59:56.964470 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.964867 kubelet[2843]: E1216 13:59:56.964827 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.964867 kubelet[2843]: W1216 13:59:56.964845 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.965056 kubelet[2843]: E1216 13:59:56.965004 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.965520 kubelet[2843]: E1216 13:59:56.965478 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.965520 kubelet[2843]: W1216 13:59:56.965497 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.965856 kubelet[2843]: E1216 13:59:56.965811 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:59:56.966181 kubelet[2843]: E1216 13:59:56.966149 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.966181 kubelet[2843]: W1216 13:59:56.966167 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.966402 kubelet[2843]: E1216 13:59:56.966322 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.966540 kubelet[2843]: E1216 13:59:56.966523 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.966600 kubelet[2843]: W1216 13:59:56.966558 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.966600 kubelet[2843]: E1216 13:59:56.966589 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.966990 kubelet[2843]: E1216 13:59:56.966970 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.966990 kubelet[2843]: W1216 13:59:56.966988 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.967286 kubelet[2843]: E1216 13:59:56.967011 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.967663 kubelet[2843]: E1216 13:59:56.967356 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.967663 kubelet[2843]: W1216 13:59:56.967422 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.967663 kubelet[2843]: E1216 13:59:56.967448 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.968122 kubelet[2843]: E1216 13:59:56.968100 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.968198 kubelet[2843]: W1216 13:59:56.968119 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.968261 kubelet[2843]: E1216 13:59:56.968234 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:59:56.968720 kubelet[2843]: E1216 13:59:56.968642 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.968720 kubelet[2843]: W1216 13:59:56.968692 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.968720 kubelet[2843]: E1216 13:59:56.968710 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.969615 kubelet[2843]: E1216 13:59:56.969187 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.969615 kubelet[2843]: W1216 13:59:56.969233 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.969615 kubelet[2843]: E1216 13:59:56.969249 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.969810 kubelet[2843]: E1216 13:59:56.969664 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.969810 kubelet[2843]: W1216 13:59:56.969678 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.969810 kubelet[2843]: E1216 13:59:56.969718 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.970791 kubelet[2843]: E1216 13:59:56.970379 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.970791 kubelet[2843]: W1216 13:59:56.970397 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.970791 kubelet[2843]: E1216 13:59:56.970414 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:59:56.982591 kubelet[2843]: E1216 13:59:56.982555 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:59:56.982591 kubelet[2843]: W1216 13:59:56.982573 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:59:56.982591 kubelet[2843]: E1216 13:59:56.982591 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:59:57.232000 audit[3450]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3450 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:57.232000 audit[3450]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc152f5db0 a2=0 a3=7ffc152f5d9c items=0 ppid=3030 pid=3450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:57.232000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:57.238000 audit[3450]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3450 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 13:59:57.238000 audit[3450]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc152f5db0 a2=0 a3=0 items=0 ppid=3030 pid=3450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:57.238000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 13:59:57.870601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3492941035.mount: Deactivated successfully. Dec 16 13:59:59.030608 kubelet[2843]: E1216 13:59:59.029341 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c" Dec 16 13:59:59.053557 containerd[1603]: time="2025-12-16T13:59:59.053480024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:59.054706 containerd[1603]: time="2025-12-16T13:59:59.054619337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 13:59:59.058068 containerd[1603]: time="2025-12-16T13:59:59.057240654Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:59.060907 containerd[1603]: time="2025-12-16T13:59:59.060764660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:59:59.061712 containerd[1603]: time="2025-12-16T13:59:59.061668131Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.126702491s" Dec 16 13:59:59.061817 containerd[1603]: time="2025-12-16T13:59:59.061717808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" 
Dec 16 13:59:59.064999 containerd[1603]: time="2025-12-16T13:59:59.064967065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 13:59:59.090084 containerd[1603]: time="2025-12-16T13:59:59.090031651Z" level=info msg="CreateContainer within sandbox \"4c989b01f2bc5f22cef7468f0ed411ac80258b8214949b05bc49e1e9394fe7f6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 13:59:59.105530 containerd[1603]: time="2025-12-16T13:59:59.105484810Z" level=info msg="Container ef0ede2f21359a37d4b0e15e4f4ff4e07585473cbc80acd85bc8953b92797ec7: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:59:59.111659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount99750076.mount: Deactivated successfully. Dec 16 13:59:59.125800 containerd[1603]: time="2025-12-16T13:59:59.125657051Z" level=info msg="CreateContainer within sandbox \"4c989b01f2bc5f22cef7468f0ed411ac80258b8214949b05bc49e1e9394fe7f6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ef0ede2f21359a37d4b0e15e4f4ff4e07585473cbc80acd85bc8953b92797ec7\"" Dec 16 13:59:59.129778 containerd[1603]: time="2025-12-16T13:59:59.127970006Z" level=info msg="StartContainer for \"ef0ede2f21359a37d4b0e15e4f4ff4e07585473cbc80acd85bc8953b92797ec7\"" Dec 16 13:59:59.131663 containerd[1603]: time="2025-12-16T13:59:59.129732471Z" level=info msg="connecting to shim ef0ede2f21359a37d4b0e15e4f4ff4e07585473cbc80acd85bc8953b92797ec7" address="unix:///run/containerd/s/aa54ca9727c2b4ae51352b8fe0ef2e5858cda567bed9e6aab1f50804a9b3d728" protocol=ttrpc version=3 Dec 16 13:59:59.170078 systemd[1]: Started cri-containerd-ef0ede2f21359a37d4b0e15e4f4ff4e07585473cbc80acd85bc8953b92797ec7.scope - libcontainer container ef0ede2f21359a37d4b0e15e4f4ff4e07585473cbc80acd85bc8953b92797ec7. 
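Note on the driver-call.go and plugins.go failures that recur through this log: they come from the kubelet's periodic FlexVolume probing. Every <vendor>~<driver> directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ is expected to hold a driver executable that answers an init call with a JSON status object on stdout. The nodeagent~uds directory appears to be the install target of the Calico flexvol-driver (pod2daemon-flexvol) that starts further below, so until that container has put the uds binary in place each probe returns empty output and the JSON unmarshal fails; the messages are noisy but appear to be harmless. A minimal Python sketch of the init handshake a FlexVolume driver is expected to implement (illustrative stand-in only, not the missing binary):

    #!/usr/bin/env python3
    # Illustrative FlexVolume driver stub (hypothetical stand-in, not
    # Calico's real uds binary): the kubelet invokes the executable with
    # "init" and parses a JSON status object from stdout.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # init reports whether the driver supports attach/detach.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
        else:
            print(json.dumps({"status": "Not supported"}))
        return 0

    if __name__ == "__main__":
        sys.exit(main())
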
Dec 16 13:59:59.201000 audit: BPF prog-id=164 op=LOAD Dec 16 13:59:59.208099 kernel: kauditd_printk_skb: 64 callbacks suppressed Dec 16 13:59:59.208197 kernel: audit: type=1334 audit(1765893599.201:536): prog-id=164 op=LOAD Dec 16 13:59:59.205000 audit: BPF prog-id=165 op=LOAD Dec 16 13:59:59.222698 kernel: audit: type=1334 audit(1765893599.205:537): prog-id=165 op=LOAD Dec 16 13:59:59.205000 audit[3461]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3283 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:59.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566306564653266323133353961333764346230653135653466346666 Dec 16 13:59:59.284138 kernel: audit: type=1300 audit(1765893599.205:537): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3283 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:59.284279 kernel: audit: type=1327 audit(1765893599.205:537): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566306564653266323133353961333764346230653135653466346666 Dec 16 13:59:59.205000 audit: BPF prog-id=165 op=UNLOAD Dec 16 13:59:59.293764 kernel: audit: type=1334 audit(1765893599.205:538): prog-id=165 op=UNLOAD Dec 16 13:59:59.205000 audit[3461]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3283 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:59.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566306564653266323133353961333764346230653135653466346666 Dec 16 13:59:59.351306 kernel: audit: type=1300 audit(1765893599.205:538): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3283 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:59.351855 kernel: audit: type=1327 audit(1765893599.205:538): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566306564653266323133353961333764346230653135653466346666 Dec 16 13:59:59.351909 kernel: audit: type=1334 audit(1765893599.205:539): prog-id=166 op=LOAD Dec 16 13:59:59.205000 audit: BPF prog-id=166 op=LOAD Dec 16 13:59:59.205000 audit[3461]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3283 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:59.382553 containerd[1603]: time="2025-12-16T13:59:59.382449984Z" level=info msg="StartContainer for \"ef0ede2f21359a37d4b0e15e4f4ff4e07585473cbc80acd85bc8953b92797ec7\" returns successfully" Dec 16 13:59:59.388170 kernel: audit: type=1300 audit(1765893599.205:539): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3283 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:59.388443 kernel: audit: type=1327 audit(1765893599.205:539): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566306564653266323133353961333764346230653135653466346666 Dec 16 13:59:59.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566306564653266323133353961333764346230653135653466346666 Dec 16 13:59:59.205000 audit: BPF prog-id=167 op=LOAD Dec 16 13:59:59.205000 audit[3461]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3283 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:59.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566306564653266323133353961333764346230653135653466346666 Dec 16 13:59:59.205000 audit: BPF prog-id=167 op=UNLOAD Dec 16 13:59:59.205000 audit[3461]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3283 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:59.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566306564653266323133353961333764346230653135653466346666 Dec 16 13:59:59.206000 audit: BPF prog-id=166 op=UNLOAD Dec 16 13:59:59.206000 audit[3461]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3283 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:59.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566306564653266323133353961333764346230653135653466346666 Dec 16 13:59:59.206000 audit: BPF prog-id=168 op=LOAD Dec 16 13:59:59.206000 audit[3461]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3283 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 13:59:59.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566306564653266323133353961333764346230653135653466346666 Dec 16 14:00:00.087787 containerd[1603]: time="2025-12-16T14:00:00.087689295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:00:00.089289 containerd[1603]: time="2025-12-16T14:00:00.089050865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Dec 16 14:00:00.090614 containerd[1603]: time="2025-12-16T14:00:00.090568029Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:00:00.093712 containerd[1603]: time="2025-12-16T14:00:00.093667034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:00:00.094699 containerd[1603]: time="2025-12-16T14:00:00.094656983Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.029643835s" Dec 16 14:00:00.095291 containerd[1603]: time="2025-12-16T14:00:00.094850880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 14:00:00.099106 containerd[1603]: time="2025-12-16T14:00:00.099014111Z" level=info msg="CreateContainer within sandbox \"1dd54c62b10342a50be1c2567cdc1245ff316f51738f12c7858121e0f6b19873\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 14:00:00.115008 containerd[1603]: time="2025-12-16T14:00:00.114969901Z" level=info msg="Container 8c283739dfd45d2b0b93067fd11e690192af6b59c3a43c28de297f1a91493c79: CDI devices from CRI Config.CDIDevices: []" Dec 16 14:00:00.129545 containerd[1603]: time="2025-12-16T14:00:00.129490232Z" level=info msg="CreateContainer within sandbox \"1dd54c62b10342a50be1c2567cdc1245ff316f51738f12c7858121e0f6b19873\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8c283739dfd45d2b0b93067fd11e690192af6b59c3a43c28de297f1a91493c79\"" Dec 16 14:00:00.130405 containerd[1603]: time="2025-12-16T14:00:00.130357429Z" level=info msg="StartContainer for \"8c283739dfd45d2b0b93067fd11e690192af6b59c3a43c28de297f1a91493c79\"" Dec 16 14:00:00.133356 containerd[1603]: time="2025-12-16T14:00:00.133262730Z" level=info msg="connecting to shim 8c283739dfd45d2b0b93067fd11e690192af6b59c3a43c28de297f1a91493c79" address="unix:///run/containerd/s/f4ab1d4a005c13eab911f158779271276e9d7e2db83b113b349946c7a18b5ed3" protocol=ttrpc version=3 Dec 16 14:00:00.171036 systemd[1]: Started 
cri-containerd-8c283739dfd45d2b0b93067fd11e690192af6b59c3a43c28de297f1a91493c79.scope - libcontainer container 8c283739dfd45d2b0b93067fd11e690192af6b59c3a43c28de297f1a91493c79. Dec 16 14:00:00.242000 audit: BPF prog-id=169 op=LOAD Dec 16 14:00:00.242000 audit[3501]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3367 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:00.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863323833373339646664343564326230623933303637666431316536 Dec 16 14:00:00.242000 audit: BPF prog-id=170 op=LOAD Dec 16 14:00:00.242000 audit[3501]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3367 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:00.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863323833373339646664343564326230623933303637666431316536 Dec 16 14:00:00.242000 audit: BPF prog-id=170 op=UNLOAD Dec 16 14:00:00.242000 audit[3501]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:00.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863323833373339646664343564326230623933303637666431316536 Dec 16 14:00:00.242000 audit: BPF prog-id=169 op=UNLOAD Dec 16 14:00:00.242000 audit[3501]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:00.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863323833373339646664343564326230623933303637666431316536 Dec 16 14:00:00.242000 audit: BPF prog-id=171 op=LOAD Dec 16 14:00:00.242000 audit[3501]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3367 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:00.242000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863323833373339646664343564326230623933303637666431316536 Dec 16 14:00:00.283307 containerd[1603]: time="2025-12-16T14:00:00.283097285Z" level=info msg="StartContainer for \"8c283739dfd45d2b0b93067fd11e690192af6b59c3a43c28de297f1a91493c79\" returns successfully" Dec 16 14:00:00.291532 kubelet[2843]: E1216 14:00:00.291334 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:00:00.291532 kubelet[2843]: W1216 14:00:00.291365 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:00:00.291532 kubelet[2843]: E1216 14:00:00.291464 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:00:00.293720 kubelet[2843]: E1216 14:00:00.293407 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:00:00.293720 kubelet[2843]: W1216 14:00:00.293436 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:00:00.293720 kubelet[2843]: E1216 14:00:00.293482 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:00:00.294821 kubelet[2843]: E1216 14:00:00.294796 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:00:00.295109 kubelet[2843]: W1216 14:00:00.295076 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:00:00.295369 kubelet[2843]: E1216 14:00:00.295216 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:00:00.296821 kubelet[2843]: E1216 14:00:00.296628 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:00:00.296821 kubelet[2843]: W1216 14:00:00.296652 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:00:00.296821 kubelet[2843]: E1216 14:00:00.296673 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:00:00.298726 kubelet[2843]: E1216 14:00:00.298703 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:00:00.299147 kubelet[2843]: W1216 14:00:00.298910 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:00:00.299147 kubelet[2843]: E1216 14:00:00.298936 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:00:00.300034 kubelet[2843]: E1216 14:00:00.299957 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:00:00.300034 kubelet[2843]: W1216 14:00:00.299981 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:00:00.300034 kubelet[2843]: E1216 14:00:00.299999 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:00:00.301169 kubelet[2843]: E1216 14:00:00.301065 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:00:00.301349 kubelet[2843]: W1216 14:00:00.301279 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:00:00.301349 kubelet[2843]: E1216 14:00:00.301302 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:00:00.304035 kubelet[2843]: E1216 14:00:00.304003 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:00:00.304334 kubelet[2843]: W1216 14:00:00.304160 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:00:00.304334 kubelet[2843]: E1216 14:00:00.304187 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 14:00:00.305135 kubelet[2843]: E1216 14:00:00.305104 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 14:00:00.305404 kubelet[2843]: W1216 14:00:00.305334 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 14:00:00.305404 kubelet[2843]: E1216 14:00:00.305359 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 14:00:00.306394 systemd[1]: cri-containerd-8c283739dfd45d2b0b93067fd11e690192af6b59c3a43c28de297f1a91493c79.scope: Deactivated successfully. Dec 16 14:00:00.309000 audit: BPF prog-id=171 op=UNLOAD Dec 16 14:00:00.315451 containerd[1603]: time="2025-12-16T14:00:00.315400182Z" level=info msg="received container exit event container_id:\"8c283739dfd45d2b0b93067fd11e690192af6b59c3a43c28de297f1a91493c79\" id:\"8c283739dfd45d2b0b93067fd11e690192af6b59c3a43c28de297f1a91493c79\" pid:3513 exited_at:{seconds:1765893600 nanos:314129025}" Dec 16 14:00:00.316922 kubelet[2843]: E1216 14:00:00.316885 2843 cadvisor_stats_provider.go:522] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod129c5b98_2fc5_42a9_b073_2d0f446af2df.slice/cri-containerd-8c283739dfd45d2b0b93067fd11e690192af6b59c3a43c28de297f1a91493c79.scope\": RecentStats: unable to find data in memory cache]" Dec 16 14:00:00.357663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c283739dfd45d2b0b93067fd11e690192af6b59c3a43c28de297f1a91493c79-rootfs.mount: Deactivated successfully. Dec 16 14:00:01.030134 kubelet[2843]: E1216 14:00:01.030014 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c" Dec 16 14:00:01.221549 kubelet[2843]: I1216 14:00:01.221101 2843 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 14:00:01.240431 kubelet[2843]: I1216 14:00:01.240356 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-796cc8989b-jnp8z" podStartSLOduration=3.110520753 podStartE2EDuration="5.240335803s" podCreationTimestamp="2025-12-16 13:59:56 +0000 UTC" firstStartedPulling="2025-12-16 13:59:56.934588866 +0000 UTC m=+27.101932145" lastFinishedPulling="2025-12-16 13:59:59.064403913 +0000 UTC m=+29.231747195" observedRunningTime="2025-12-16 14:00:00.23426481 +0000 UTC m=+30.401608099" watchObservedRunningTime="2025-12-16 14:00:01.240335803 +0000 UTC m=+31.407679090" Dec 16 14:00:02.229922 containerd[1603]: time="2025-12-16T14:00:02.229867977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 14:00:03.029963 kubelet[2843]: E1216 14:00:03.029860 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c" Dec 16 14:00:05.028970 kubelet[2843]: E1216 14:00:05.028905 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c" Dec 16 14:00:05.036954 kubelet[2843]: I1216 14:00:05.036730 2843 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 14:00:05.105824 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 16 14:00:05.108293 kernel: audit: type=1325 audit(1765893605.083:550): table=filter:119 family=2 
entries=21 op=nft_register_rule pid=3569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:00:05.083000 audit[3569]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:00:05.083000 audit[3569]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc3c368da0 a2=0 a3=7ffc3c368d8c items=0 ppid=3030 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:05.142772 kernel: audit: type=1300 audit(1765893605.083:550): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc3c368da0 a2=0 a3=7ffc3c368d8c items=0 ppid=3030 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:05.083000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:00:05.143000 audit[3569]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:00:05.160796 kernel: audit: type=1327 audit(1765893605.083:550): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:00:05.160867 kernel: audit: type=1325 audit(1765893605.143:551): table=nat:120 family=2 entries=19 op=nft_register_chain pid=3569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:00:05.143000 audit[3569]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc3c368da0 a2=0 a3=7ffc3c368d8c items=0 ppid=3030 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:05.209229 kernel: audit: type=1300 audit(1765893605.143:551): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc3c368da0 a2=0 a3=7ffc3c368d8c items=0 ppid=3030 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:05.209336 kernel: audit: type=1327 audit(1765893605.143:551): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:00:05.143000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:00:07.030059 kubelet[2843]: E1216 14:00:07.029923 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c" Dec 16 14:00:09.029360 kubelet[2843]: E1216 14:00:09.029292 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c" Dec 16 14:00:10.540061 containerd[1603]: time="2025-12-16T14:00:10.539990878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:00:10.541262 containerd[1603]: time="2025-12-16T14:00:10.541217391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 14:00:10.543727 containerd[1603]: time="2025-12-16T14:00:10.542146413Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:00:10.544880 containerd[1603]: time="2025-12-16T14:00:10.544838337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:00:10.545835 containerd[1603]: time="2025-12-16T14:00:10.545797633Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 8.31558363s" Dec 16 14:00:10.546012 containerd[1603]: time="2025-12-16T14:00:10.545983044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 14:00:10.550000 containerd[1603]: time="2025-12-16T14:00:10.549938613Z" level=info msg="CreateContainer within sandbox \"1dd54c62b10342a50be1c2567cdc1245ff316f51738f12c7858121e0f6b19873\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 14:00:10.563774 containerd[1603]: time="2025-12-16T14:00:10.559819051Z" level=info msg="Container f756108b282d0a734c66046569282d9d18f6626f089e7c5c36e8218c326a1635: CDI devices from CRI Config.CDIDevices: []" Dec 16 14:00:10.576602 containerd[1603]: time="2025-12-16T14:00:10.576549424Z" level=info msg="CreateContainer within sandbox \"1dd54c62b10342a50be1c2567cdc1245ff316f51738f12c7858121e0f6b19873\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f756108b282d0a734c66046569282d9d18f6626f089e7c5c36e8218c326a1635\"" Dec 16 14:00:10.577469 containerd[1603]: time="2025-12-16T14:00:10.577408498Z" level=info msg="StartContainer for \"f756108b282d0a734c66046569282d9d18f6626f089e7c5c36e8218c326a1635\"" Dec 16 14:00:10.579826 containerd[1603]: time="2025-12-16T14:00:10.579764650Z" level=info msg="connecting to shim f756108b282d0a734c66046569282d9d18f6626f089e7c5c36e8218c326a1635" address="unix:///run/containerd/s/f4ab1d4a005c13eab911f158779271276e9d7e2db83b113b349946c7a18b5ed3" protocol=ttrpc version=3 Dec 16 14:00:10.613041 systemd[1]: Started cri-containerd-f756108b282d0a734c66046569282d9d18f6626f089e7c5c36e8218c326a1635.scope - libcontainer container f756108b282d0a734c66046569282d9d18f6626f089e7c5c36e8218c326a1635. 
Dec 16 14:00:10.670000 audit: BPF prog-id=172 op=LOAD Dec 16 14:00:10.670000 audit[3581]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3367 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:10.707944 kernel: audit: type=1334 audit(1765893610.670:552): prog-id=172 op=LOAD Dec 16 14:00:10.708091 kernel: audit: type=1300 audit(1765893610.670:552): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3367 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:10.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353631303862323832643061373334633636303436353639323832 Dec 16 14:00:10.739848 kernel: audit: type=1327 audit(1765893610.670:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353631303862323832643061373334633636303436353639323832 Dec 16 14:00:10.670000 audit: BPF prog-id=173 op=LOAD Dec 16 14:00:10.747777 kernel: audit: type=1334 audit(1765893610.670:553): prog-id=173 op=LOAD Dec 16 14:00:10.670000 audit[3581]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3367 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:10.782859 kernel: audit: type=1300 audit(1765893610.670:553): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3367 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:10.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353631303862323832643061373334633636303436353639323832 Dec 16 14:00:10.797082 containerd[1603]: time="2025-12-16T14:00:10.796962705Z" level=info msg="StartContainer for \"f756108b282d0a734c66046569282d9d18f6626f089e7c5c36e8218c326a1635\" returns successfully" Dec 16 14:00:10.670000 audit: BPF prog-id=173 op=UNLOAD Dec 16 14:00:10.824954 kernel: audit: type=1327 audit(1765893610.670:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353631303862323832643061373334633636303436353639323832 Dec 16 14:00:10.825063 kernel: audit: type=1334 audit(1765893610.670:554): prog-id=173 op=UNLOAD Dec 16 14:00:10.825125 kernel: audit: type=1300 audit(1765893610.670:554): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:10.670000 audit[3581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:10.882463 kernel: audit: type=1327 audit(1765893610.670:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353631303862323832643061373334633636303436353639323832 Dec 16 14:00:10.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353631303862323832643061373334633636303436353639323832 Dec 16 14:00:10.670000 audit: BPF prog-id=172 op=UNLOAD Dec 16 14:00:10.670000 audit[3581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:10.891824 kernel: audit: type=1334 audit(1765893610.670:555): prog-id=172 op=UNLOAD Dec 16 14:00:10.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353631303862323832643061373334633636303436353639323832 Dec 16 14:00:10.670000 audit: BPF prog-id=174 op=LOAD Dec 16 14:00:10.670000 audit[3581]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3367 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:10.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637353631303862323832643061373334633636303436353639323832 Dec 16 14:00:11.029826 kubelet[2843]: E1216 14:00:11.029714 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c" Dec 16 14:00:11.887185 systemd[1]: cri-containerd-f756108b282d0a734c66046569282d9d18f6626f089e7c5c36e8218c326a1635.scope: Deactivated successfully. Dec 16 14:00:11.888206 systemd[1]: cri-containerd-f756108b282d0a734c66046569282d9d18f6626f089e7c5c36e8218c326a1635.scope: Consumed 691ms CPU time, 198.4M memory peak, 171.3M written to disk. 
Dec 16 14:00:11.891373 containerd[1603]: time="2025-12-16T14:00:11.891310980Z" level=info msg="received container exit event container_id:\"f756108b282d0a734c66046569282d9d18f6626f089e7c5c36e8218c326a1635\" id:\"f756108b282d0a734c66046569282d9d18f6626f089e7c5c36e8218c326a1635\" pid:3594 exited_at:{seconds:1765893611 nanos:890933988}" Dec 16 14:00:11.892000 audit: BPF prog-id=174 op=UNLOAD Dec 16 14:00:11.925125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f756108b282d0a734c66046569282d9d18f6626f089e7c5c36e8218c326a1635-rootfs.mount: Deactivated successfully. Dec 16 14:00:11.976448 kubelet[2843]: I1216 14:00:11.976201 2843 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 14:00:12.028208 systemd[1]: Created slice kubepods-burstable-pod4e270e1c_6e68_4a70_8c3e_e3b7e23d7ba4.slice - libcontainer container kubepods-burstable-pod4e270e1c_6e68_4a70_8c3e_e3b7e23d7ba4.slice. Dec 16 14:00:12.071172 systemd[1]: Created slice kubepods-burstable-pod3882b1ea_b8cc_4201_890d_09890488c736.slice - libcontainer container kubepods-burstable-pod3882b1ea_b8cc_4201_890d_09890488c736.slice. Dec 16 14:00:12.099069 systemd[1]: Created slice kubepods-besteffort-pod61b00035_2995_44f7_ae62_3ec89692e439.slice - libcontainer container kubepods-besteffort-pod61b00035_2995_44f7_ae62_3ec89692e439.slice. Dec 16 14:00:12.107376 systemd[1]: Created slice kubepods-besteffort-pod8027bcee_b711_46ed_b726_80ae4d1f28c2.slice - libcontainer container kubepods-besteffort-pod8027bcee_b711_46ed_b726_80ae4d1f28c2.slice. Dec 16 14:00:12.168329 kubelet[2843]: I1216 14:00:12.118401 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e270e1c-6e68-4a70-8c3e-e3b7e23d7ba4-config-volume\") pod \"coredns-668d6bf9bc-k7tpf\" (UID: \"4e270e1c-6e68-4a70-8c3e-e3b7e23d7ba4\") " pod="kube-system/coredns-668d6bf9bc-k7tpf" Dec 16 14:00:12.168329 kubelet[2843]: I1216 14:00:12.118790 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w984r\" (UniqueName: \"kubernetes.io/projected/3882b1ea-b8cc-4201-890d-09890488c736-kube-api-access-w984r\") pod \"coredns-668d6bf9bc-tnb5b\" (UID: \"3882b1ea-b8cc-4201-890d-09890488c736\") " pod="kube-system/coredns-668d6bf9bc-tnb5b" Dec 16 14:00:12.168329 kubelet[2843]: I1216 14:00:12.118948 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpfdw\" (UniqueName: \"kubernetes.io/projected/4e270e1c-6e68-4a70-8c3e-e3b7e23d7ba4-kube-api-access-hpfdw\") pod \"coredns-668d6bf9bc-k7tpf\" (UID: \"4e270e1c-6e68-4a70-8c3e-e3b7e23d7ba4\") " pod="kube-system/coredns-668d6bf9bc-k7tpf" Dec 16 14:00:12.168329 kubelet[2843]: I1216 14:00:12.118986 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3882b1ea-b8cc-4201-890d-09890488c736-config-volume\") pod \"coredns-668d6bf9bc-tnb5b\" (UID: \"3882b1ea-b8cc-4201-890d-09890488c736\") " pod="kube-system/coredns-668d6bf9bc-tnb5b" Dec 16 14:00:12.120683 systemd[1]: Created slice kubepods-besteffort-pod3ee33909_6767_4f65_befa_f64702fcbe38.slice - libcontainer container kubepods-besteffort-pod3ee33909_6767_4f65_befa_f64702fcbe38.slice. 
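Note on the kubepods slice names systemd reports in these entries: they follow the kubelet's systemd cgroup-driver convention as it appears here, with the QoS class and the pod UID folded into one slice name and the UID's dashes rewritten to underscores (the coredns-668d6bf9bc-k7tpf volume entries above give the UID 4e270e1c-6e68-4a70-8c3e-e3b7e23d7ba4 in dashed form, while its slice is kubepods-burstable-pod4e270e1c_6e68_4a70_8c3e_e3b7e23d7ba4.slice). A small sketch of that mapping, derived only from these log lines:

    # Reproduce the slice names from the "Created slice kubepods-..." entries:
    # QoS class plus pod UID, with the UID's dashes rewritten to underscores.
    def pod_slice(qos_class: str, pod_uid: str) -> str:
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice("burstable", "4e270e1c-6e68-4a70-8c3e-e3b7e23d7ba4"))
    # -> kubepods-burstable-pod4e270e1c_6e68_4a70_8c3e_e3b7e23d7ba4.slice
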
Dec 16 14:00:12.135055 systemd[1]: Created slice kubepods-besteffort-podd41be582_68e3_4041_abac_e335f6c6ba13.slice - libcontainer container kubepods-besteffort-podd41be582_68e3_4041_abac_e335f6c6ba13.slice. Dec 16 14:00:12.148380 systemd[1]: Created slice kubepods-besteffort-pod1769a332_0974_4355_84f4_605660a8e93f.slice - libcontainer container kubepods-besteffort-pod1769a332_0974_4355_84f4_605660a8e93f.slice. Dec 16 14:00:12.219955 kubelet[2843]: I1216 14:00:12.219653 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1769a332-0974-4355-84f4-605660a8e93f-tigera-ca-bundle\") pod \"calico-kube-controllers-6d8d578c6b-htjdj\" (UID: \"1769a332-0974-4355-84f4-605660a8e93f\") " pod="calico-system/calico-kube-controllers-6d8d578c6b-htjdj" Dec 16 14:00:12.219955 kubelet[2843]: I1216 14:00:12.219771 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2cq8\" (UniqueName: \"kubernetes.io/projected/1769a332-0974-4355-84f4-605660a8e93f-kube-api-access-d2cq8\") pod \"calico-kube-controllers-6d8d578c6b-htjdj\" (UID: \"1769a332-0974-4355-84f4-605660a8e93f\") " pod="calico-system/calico-kube-controllers-6d8d578c6b-htjdj" Dec 16 14:00:12.219955 kubelet[2843]: I1216 14:00:12.219811 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61b00035-2995-44f7-ae62-3ec89692e439-config\") pod \"goldmane-666569f655-fdrzj\" (UID: \"61b00035-2995-44f7-ae62-3ec89692e439\") " pod="calico-system/goldmane-666569f655-fdrzj" Dec 16 14:00:12.219955 kubelet[2843]: I1216 14:00:12.219861 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3ee33909-6767-4f65-befa-f64702fcbe38-calico-apiserver-certs\") pod \"calico-apiserver-66cd68bd7b-v9gp2\" (UID: \"3ee33909-6767-4f65-befa-f64702fcbe38\") " pod="calico-apiserver/calico-apiserver-66cd68bd7b-v9gp2" Dec 16 14:00:12.219955 kubelet[2843]: I1216 14:00:12.219899 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw9wz\" (UniqueName: \"kubernetes.io/projected/d41be582-68e3-4041-abac-e335f6c6ba13-kube-api-access-hw9wz\") pod \"calico-apiserver-66cd68bd7b-ds2n7\" (UID: \"d41be582-68e3-4041-abac-e335f6c6ba13\") " pod="calico-apiserver/calico-apiserver-66cd68bd7b-ds2n7" Dec 16 14:00:12.220356 kubelet[2843]: I1216 14:00:12.219983 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d41be582-68e3-4041-abac-e335f6c6ba13-calico-apiserver-certs\") pod \"calico-apiserver-66cd68bd7b-ds2n7\" (UID: \"d41be582-68e3-4041-abac-e335f6c6ba13\") " pod="calico-apiserver/calico-apiserver-66cd68bd7b-ds2n7" Dec 16 14:00:12.220356 kubelet[2843]: I1216 14:00:12.220029 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbzd6\" (UniqueName: \"kubernetes.io/projected/61b00035-2995-44f7-ae62-3ec89692e439-kube-api-access-gbzd6\") pod \"goldmane-666569f655-fdrzj\" (UID: \"61b00035-2995-44f7-ae62-3ec89692e439\") " pod="calico-system/goldmane-666569f655-fdrzj" Dec 16 14:00:12.220356 kubelet[2843]: I1216 14:00:12.220135 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8027bcee-b711-46ed-b726-80ae4d1f28c2-whisker-backend-key-pair\") pod \"whisker-6574dc5598-22cdf\" (UID: \"8027bcee-b711-46ed-b726-80ae4d1f28c2\") " pod="calico-system/whisker-6574dc5598-22cdf" Dec 16 14:00:12.220356 kubelet[2843]: I1216 14:00:12.220163 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8027bcee-b711-46ed-b726-80ae4d1f28c2-whisker-ca-bundle\") pod \"whisker-6574dc5598-22cdf\" (UID: \"8027bcee-b711-46ed-b726-80ae4d1f28c2\") " pod="calico-system/whisker-6574dc5598-22cdf" Dec 16 14:00:12.220356 kubelet[2843]: I1216 14:00:12.220211 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmx72\" (UniqueName: \"kubernetes.io/projected/8027bcee-b711-46ed-b726-80ae4d1f28c2-kube-api-access-mmx72\") pod \"whisker-6574dc5598-22cdf\" (UID: \"8027bcee-b711-46ed-b726-80ae4d1f28c2\") " pod="calico-system/whisker-6574dc5598-22cdf" Dec 16 14:00:12.220625 kubelet[2843]: I1216 14:00:12.220250 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61b00035-2995-44f7-ae62-3ec89692e439-goldmane-ca-bundle\") pod \"goldmane-666569f655-fdrzj\" (UID: \"61b00035-2995-44f7-ae62-3ec89692e439\") " pod="calico-system/goldmane-666569f655-fdrzj" Dec 16 14:00:12.220625 kubelet[2843]: I1216 14:00:12.220299 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ftvw\" (UniqueName: \"kubernetes.io/projected/3ee33909-6767-4f65-befa-f64702fcbe38-kube-api-access-5ftvw\") pod \"calico-apiserver-66cd68bd7b-v9gp2\" (UID: \"3ee33909-6767-4f65-befa-f64702fcbe38\") " pod="calico-apiserver/calico-apiserver-66cd68bd7b-v9gp2" Dec 16 14:00:12.220625 kubelet[2843]: I1216 14:00:12.220331 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/61b00035-2995-44f7-ae62-3ec89692e439-goldmane-key-pair\") pod \"goldmane-666569f655-fdrzj\" (UID: \"61b00035-2995-44f7-ae62-3ec89692e439\") " pod="calico-system/goldmane-666569f655-fdrzj" Dec 16 14:00:12.376667 containerd[1603]: time="2025-12-16T14:00:12.376416634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k7tpf,Uid:4e270e1c-6e68-4a70-8c3e-e3b7e23d7ba4,Namespace:kube-system,Attempt:0,}" Dec 16 14:00:12.387538 containerd[1603]: time="2025-12-16T14:00:12.387391276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tnb5b,Uid:3882b1ea-b8cc-4201-890d-09890488c736,Namespace:kube-system,Attempt:0,}" Dec 16 14:00:12.470499 containerd[1603]: time="2025-12-16T14:00:12.470357477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fdrzj,Uid:61b00035-2995-44f7-ae62-3ec89692e439,Namespace:calico-system,Attempt:0,}" Dec 16 14:00:12.474276 containerd[1603]: time="2025-12-16T14:00:12.474234166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66cd68bd7b-v9gp2,Uid:3ee33909-6767-4f65-befa-f64702fcbe38,Namespace:calico-apiserver,Attempt:0,}" Dec 16 14:00:12.477279 containerd[1603]: time="2025-12-16T14:00:12.477217665Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6d8d578c6b-htjdj,Uid:1769a332-0974-4355-84f4-605660a8e93f,Namespace:calico-system,Attempt:0,}" Dec 16 14:00:12.477779 containerd[1603]: time="2025-12-16T14:00:12.477423657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66cd68bd7b-ds2n7,Uid:d41be582-68e3-4041-abac-e335f6c6ba13,Namespace:calico-apiserver,Attempt:0,}" Dec 16 14:00:12.477912 containerd[1603]: time="2025-12-16T14:00:12.477670985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6574dc5598-22cdf,Uid:8027bcee-b711-46ed-b726-80ae4d1f28c2,Namespace:calico-system,Attempt:0,}" Dec 16 14:00:12.822297 containerd[1603]: time="2025-12-16T14:00:12.822169326Z" level=error msg="Failed to destroy network for sandbox \"98f97538f03e96d7fc1966a33241a7bee841ba347ba918296c62ce4f5c584e64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.826147 containerd[1603]: time="2025-12-16T14:00:12.825907800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k7tpf,Uid:4e270e1c-6e68-4a70-8c3e-e3b7e23d7ba4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"98f97538f03e96d7fc1966a33241a7bee841ba347ba918296c62ce4f5c584e64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.826342 kubelet[2843]: E1216 14:00:12.826189 2843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98f97538f03e96d7fc1966a33241a7bee841ba347ba918296c62ce4f5c584e64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.826342 kubelet[2843]: E1216 14:00:12.826273 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98f97538f03e96d7fc1966a33241a7bee841ba347ba918296c62ce4f5c584e64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k7tpf" Dec 16 14:00:12.826342 kubelet[2843]: E1216 14:00:12.826310 2843 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98f97538f03e96d7fc1966a33241a7bee841ba347ba918296c62ce4f5c584e64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k7tpf" Dec 16 14:00:12.826532 kubelet[2843]: E1216 14:00:12.826373 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k7tpf_kube-system(4e270e1c-6e68-4a70-8c3e-e3b7e23d7ba4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k7tpf_kube-system(4e270e1c-6e68-4a70-8c3e-e3b7e23d7ba4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98f97538f03e96d7fc1966a33241a7bee841ba347ba918296c62ce4f5c584e64\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k7tpf" podUID="4e270e1c-6e68-4a70-8c3e-e3b7e23d7ba4" Dec 16 14:00:12.834381 containerd[1603]: time="2025-12-16T14:00:12.834254134Z" level=error msg="Failed to destroy network for sandbox \"d87c936bdbaa01f002eed67d04b4c76f1cdcf544241742c0e17b9a4280605839\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.845833 containerd[1603]: time="2025-12-16T14:00:12.845314518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6574dc5598-22cdf,Uid:8027bcee-b711-46ed-b726-80ae4d1f28c2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87c936bdbaa01f002eed67d04b4c76f1cdcf544241742c0e17b9a4280605839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.846021 kubelet[2843]: E1216 14:00:12.845591 2843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87c936bdbaa01f002eed67d04b4c76f1cdcf544241742c0e17b9a4280605839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.846021 kubelet[2843]: E1216 14:00:12.845657 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87c936bdbaa01f002eed67d04b4c76f1cdcf544241742c0e17b9a4280605839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6574dc5598-22cdf" Dec 16 14:00:12.846021 kubelet[2843]: E1216 14:00:12.845688 2843 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d87c936bdbaa01f002eed67d04b4c76f1cdcf544241742c0e17b9a4280605839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6574dc5598-22cdf" Dec 16 14:00:12.846220 kubelet[2843]: E1216 14:00:12.845844 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6574dc5598-22cdf_calico-system(8027bcee-b711-46ed-b726-80ae4d1f28c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6574dc5598-22cdf_calico-system(8027bcee-b711-46ed-b726-80ae4d1f28c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d87c936bdbaa01f002eed67d04b4c76f1cdcf544241742c0e17b9a4280605839\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6574dc5598-22cdf" podUID="8027bcee-b711-46ed-b726-80ae4d1f28c2" Dec 16 14:00:12.846618 containerd[1603]: time="2025-12-16T14:00:12.846571206Z" 
level=error msg="Failed to destroy network for sandbox \"bc85867a2d9bed65069bdf393991d9b27759f53d76790ebf3f3dabc2738b3edb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.849792 containerd[1603]: time="2025-12-16T14:00:12.848868527Z" level=error msg="Failed to destroy network for sandbox \"2b8f11079fc62356f6faa2fe19c649cd9c8cea1cfd30ca66637268e5bd565715\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.853611 containerd[1603]: time="2025-12-16T14:00:12.853557554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d8d578c6b-htjdj,Uid:1769a332-0974-4355-84f4-605660a8e93f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc85867a2d9bed65069bdf393991d9b27759f53d76790ebf3f3dabc2738b3edb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.854418 kubelet[2843]: E1216 14:00:12.853942 2843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc85867a2d9bed65069bdf393991d9b27759f53d76790ebf3f3dabc2738b3edb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.854418 kubelet[2843]: E1216 14:00:12.854011 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc85867a2d9bed65069bdf393991d9b27759f53d76790ebf3f3dabc2738b3edb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d8d578c6b-htjdj" Dec 16 14:00:12.854418 kubelet[2843]: E1216 14:00:12.854047 2843 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc85867a2d9bed65069bdf393991d9b27759f53d76790ebf3f3dabc2738b3edb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d8d578c6b-htjdj" Dec 16 14:00:12.854704 kubelet[2843]: E1216 14:00:12.854169 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d8d578c6b-htjdj_calico-system(1769a332-0974-4355-84f4-605660a8e93f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d8d578c6b-htjdj_calico-system(1769a332-0974-4355-84f4-605660a8e93f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc85867a2d9bed65069bdf393991d9b27759f53d76790ebf3f3dabc2738b3edb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-6d8d578c6b-htjdj" podUID="1769a332-0974-4355-84f4-605660a8e93f" Dec 16 14:00:12.859448 containerd[1603]: time="2025-12-16T14:00:12.859395448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tnb5b,Uid:3882b1ea-b8cc-4201-890d-09890488c736,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b8f11079fc62356f6faa2fe19c649cd9c8cea1cfd30ca66637268e5bd565715\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.860085 kubelet[2843]: E1216 14:00:12.859918 2843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b8f11079fc62356f6faa2fe19c649cd9c8cea1cfd30ca66637268e5bd565715\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.860085 kubelet[2843]: E1216 14:00:12.860005 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b8f11079fc62356f6faa2fe19c649cd9c8cea1cfd30ca66637268e5bd565715\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tnb5b" Dec 16 14:00:12.861400 kubelet[2843]: E1216 14:00:12.860035 2843 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b8f11079fc62356f6faa2fe19c649cd9c8cea1cfd30ca66637268e5bd565715\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tnb5b" Dec 16 14:00:12.863498 kubelet[2843]: E1216 14:00:12.862678 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tnb5b_kube-system(3882b1ea-b8cc-4201-890d-09890488c736)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tnb5b_kube-system(3882b1ea-b8cc-4201-890d-09890488c736)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b8f11079fc62356f6faa2fe19c649cd9c8cea1cfd30ca66637268e5bd565715\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tnb5b" podUID="3882b1ea-b8cc-4201-890d-09890488c736" Dec 16 14:00:12.877553 containerd[1603]: time="2025-12-16T14:00:12.877162394Z" level=error msg="Failed to destroy network for sandbox \"f48f929712c7063654535b48beb7f51c0b6d8d3599b5b1bfa96619aa89e3c5ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.879562 containerd[1603]: time="2025-12-16T14:00:12.879499279Z" level=error msg="Failed to destroy network for sandbox \"bc141a4db13acf1524c78660808b8b6ae4ce702945f568c8cef8ed34125208d3\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.880206 containerd[1603]: time="2025-12-16T14:00:12.880156581Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fdrzj,Uid:61b00035-2995-44f7-ae62-3ec89692e439,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f48f929712c7063654535b48beb7f51c0b6d8d3599b5b1bfa96619aa89e3c5ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.880843 kubelet[2843]: E1216 14:00:12.880524 2843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f48f929712c7063654535b48beb7f51c0b6d8d3599b5b1bfa96619aa89e3c5ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.880843 kubelet[2843]: E1216 14:00:12.880583 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f48f929712c7063654535b48beb7f51c0b6d8d3599b5b1bfa96619aa89e3c5ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fdrzj" Dec 16 14:00:12.880843 kubelet[2843]: E1216 14:00:12.880615 2843 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f48f929712c7063654535b48beb7f51c0b6d8d3599b5b1bfa96619aa89e3c5ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fdrzj" Dec 16 14:00:12.881503 kubelet[2843]: E1216 14:00:12.880676 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-fdrzj_calico-system(61b00035-2995-44f7-ae62-3ec89692e439)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-fdrzj_calico-system(61b00035-2995-44f7-ae62-3ec89692e439)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f48f929712c7063654535b48beb7f51c0b6d8d3599b5b1bfa96619aa89e3c5ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-fdrzj" podUID="61b00035-2995-44f7-ae62-3ec89692e439" Dec 16 14:00:12.883555 containerd[1603]: time="2025-12-16T14:00:12.883497681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66cd68bd7b-v9gp2,Uid:3ee33909-6767-4f65-befa-f64702fcbe38,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc141a4db13acf1524c78660808b8b6ae4ce702945f568c8cef8ed34125208d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 
14:00:12.884136 kubelet[2843]: E1216 14:00:12.883894 2843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc141a4db13acf1524c78660808b8b6ae4ce702945f568c8cef8ed34125208d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.884136 kubelet[2843]: E1216 14:00:12.883952 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc141a4db13acf1524c78660808b8b6ae4ce702945f568c8cef8ed34125208d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66cd68bd7b-v9gp2" Dec 16 14:00:12.884136 kubelet[2843]: E1216 14:00:12.883980 2843 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc141a4db13acf1524c78660808b8b6ae4ce702945f568c8cef8ed34125208d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66cd68bd7b-v9gp2" Dec 16 14:00:12.884351 kubelet[2843]: E1216 14:00:12.884041 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66cd68bd7b-v9gp2_calico-apiserver(3ee33909-6767-4f65-befa-f64702fcbe38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66cd68bd7b-v9gp2_calico-apiserver(3ee33909-6767-4f65-befa-f64702fcbe38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc141a4db13acf1524c78660808b8b6ae4ce702945f568c8cef8ed34125208d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-v9gp2" podUID="3ee33909-6767-4f65-befa-f64702fcbe38" Dec 16 14:00:12.885765 containerd[1603]: time="2025-12-16T14:00:12.885679127Z" level=error msg="Failed to destroy network for sandbox \"e6453c55b072acf811c0b351ba5c5eaeeb9aae9af551ee6d6956a257b774c3e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.888547 containerd[1603]: time="2025-12-16T14:00:12.888495013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66cd68bd7b-ds2n7,Uid:d41be582-68e3-4041-abac-e335f6c6ba13,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6453c55b072acf811c0b351ba5c5eaeeb9aae9af551ee6d6956a257b774c3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.888819 kubelet[2843]: E1216 14:00:12.888735 2843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6453c55b072acf811c0b351ba5c5eaeeb9aae9af551ee6d6956a257b774c3e6\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:12.888947 kubelet[2843]: E1216 14:00:12.888842 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6453c55b072acf811c0b351ba5c5eaeeb9aae9af551ee6d6956a257b774c3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66cd68bd7b-ds2n7" Dec 16 14:00:12.888947 kubelet[2843]: E1216 14:00:12.888871 2843 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6453c55b072acf811c0b351ba5c5eaeeb9aae9af551ee6d6956a257b774c3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66cd68bd7b-ds2n7" Dec 16 14:00:12.889191 kubelet[2843]: E1216 14:00:12.889042 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66cd68bd7b-ds2n7_calico-apiserver(d41be582-68e3-4041-abac-e335f6c6ba13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66cd68bd7b-ds2n7_calico-apiserver(d41be582-68e3-4041-abac-e335f6c6ba13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6453c55b072acf811c0b351ba5c5eaeeb9aae9af551ee6d6956a257b774c3e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-ds2n7" podUID="d41be582-68e3-4041-abac-e335f6c6ba13" Dec 16 14:00:13.037637 systemd[1]: Created slice kubepods-besteffort-podfff9659c_3470_45ea_9613_14efa791d03c.slice - libcontainer container kubepods-besteffort-podfff9659c_3470_45ea_9613_14efa791d03c.slice. Dec 16 14:00:13.041949 containerd[1603]: time="2025-12-16T14:00:13.041904713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7s56z,Uid:fff9659c-3470-45ea-9613-14efa791d03c,Namespace:calico-system,Attempt:0,}" Dec 16 14:00:13.118809 containerd[1603]: time="2025-12-16T14:00:13.114606959Z" level=error msg="Failed to destroy network for sandbox \"b112a9753146c47d9ccd6d46e7e364847572f31a7242f0cd35fec0c198f0adc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:13.120481 systemd[1]: run-netns-cni\x2dc59de103\x2dc7c2\x2dec8c\x2dee1d\x2dc7135653dc6b.mount: Deactivated successfully. 
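Every sandbox failure above reports the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename before doing any ADD or DEL work, and that file only exists once the calico/node DaemonSet pod has started and written its node name into the host-mounted /var/lib/calico/ directory. A minimal Go sketch of that precondition check (illustrative only; the path comes from the error text, the helper name readCalicoNodename is hypothetical):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // readCalicoNodename mirrors the check implied by the errors above: the CNI
    // plugin stats /var/lib/calico/nodename and refuses to set up or tear down
    // pod networking if the file is missing, because calico/node has not yet
    // started (or has not mounted /var/lib/calico/ from the host).
    func readCalicoNodename() (string, error) {
        const path = "/var/lib/calico/nodename"
        if _, err := os.Stat(path); err != nil {
            return "", fmt.Errorf("stat %s: %w (is the calico/node container running and is /var/lib/calico/ mounted?)", path, err)
        }
        b, err := os.ReadFile(path)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        name, err := readCalicoNodename()
        if err != nil {
            fmt.Println("CNI precondition not met:", err)
            return
        }
        fmt.Println("node name:", name)
    }

Until calico-node is running, every pod that needs a network namespace (CoreDNS, whisker, the apiservers, goldmane) fails sandbox creation with this error and is retried by the kubelet, which is why the same message repeats for each pod.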
Dec 16 14:00:13.122378 containerd[1603]: time="2025-12-16T14:00:13.122229410Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7s56z,Uid:fff9659c-3470-45ea-9613-14efa791d03c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b112a9753146c47d9ccd6d46e7e364847572f31a7242f0cd35fec0c198f0adc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:13.123259 kubelet[2843]: E1216 14:00:13.123019 2843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b112a9753146c47d9ccd6d46e7e364847572f31a7242f0cd35fec0c198f0adc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 14:00:13.123259 kubelet[2843]: E1216 14:00:13.123099 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b112a9753146c47d9ccd6d46e7e364847572f31a7242f0cd35fec0c198f0adc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7s56z" Dec 16 14:00:13.123259 kubelet[2843]: E1216 14:00:13.123140 2843 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b112a9753146c47d9ccd6d46e7e364847572f31a7242f0cd35fec0c198f0adc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7s56z" Dec 16 14:00:13.123689 kubelet[2843]: E1216 14:00:13.123641 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7s56z_calico-system(fff9659c-3470-45ea-9613-14efa791d03c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7s56z_calico-system(fff9659c-3470-45ea-9613-14efa791d03c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b112a9753146c47d9ccd6d46e7e364847572f31a7242f0cd35fec0c198f0adc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c" Dec 16 14:00:13.266556 containerd[1603]: time="2025-12-16T14:00:13.266439234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 14:00:22.311183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3046932486.mount: Deactivated successfully. 
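The PullImage entry above kicks off the download of ghcr.io/flatcar/calico/node:v3.30.4. A minimal sketch of the same pull through containerd's Go client, assuming the v1 client module and access to the node's containerd socket; the kubelet actually issues this over the CRI API rather than this client, but the underlying pull path is the same:

    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the socket the CRI runtime on this node exposes.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull and unpack, as the PullImage step in the log does.
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.4", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
    }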
Dec 16 14:00:22.346319 containerd[1603]: time="2025-12-16T14:00:22.346237754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:00:22.348130 containerd[1603]: time="2025-12-16T14:00:22.347855556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 14:00:22.349529 containerd[1603]: time="2025-12-16T14:00:22.349479330Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:00:22.353175 containerd[1603]: time="2025-12-16T14:00:22.353127190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 14:00:22.354043 containerd[1603]: time="2025-12-16T14:00:22.353992772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.087068879s" Dec 16 14:00:22.354158 containerd[1603]: time="2025-12-16T14:00:22.354046068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 14:00:22.383257 containerd[1603]: time="2025-12-16T14:00:22.381730706Z" level=info msg="CreateContainer within sandbox \"1dd54c62b10342a50be1c2567cdc1245ff316f51738f12c7858121e0f6b19873\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 14:00:22.409815 containerd[1603]: time="2025-12-16T14:00:22.409723474Z" level=info msg="Container f53c71e6f38712368b9d19edbaf916ef94600e91b46c404fc5c604fe3c9ccbe6: CDI devices from CRI Config.CDIDevices: []" Dec 16 14:00:22.422315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3909419804.mount: Deactivated successfully. Dec 16 14:00:22.430400 containerd[1603]: time="2025-12-16T14:00:22.430188251Z" level=info msg="CreateContainer within sandbox \"1dd54c62b10342a50be1c2567cdc1245ff316f51738f12c7858121e0f6b19873\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f53c71e6f38712368b9d19edbaf916ef94600e91b46c404fc5c604fe3c9ccbe6\"" Dec 16 14:00:22.430400 containerd[1603]: time="2025-12-16T14:00:22.430892278Z" level=info msg="StartContainer for \"f53c71e6f38712368b9d19edbaf916ef94600e91b46c404fc5c604fe3c9ccbe6\"" Dec 16 14:00:22.433504 containerd[1603]: time="2025-12-16T14:00:22.433462400Z" level=info msg="connecting to shim f53c71e6f38712368b9d19edbaf916ef94600e91b46c404fc5c604fe3c9ccbe6" address="unix:///run/containerd/s/f4ab1d4a005c13eab911f158779271276e9d7e2db83b113b349946c7a18b5ed3" protocol=ttrpc version=3 Dec 16 14:00:22.473322 systemd[1]: Started cri-containerd-f53c71e6f38712368b9d19edbaf916ef94600e91b46c404fc5c604fe3c9ccbe6.scope - libcontainer container f53c71e6f38712368b9d19edbaf916ef94600e91b46c404fc5c604fe3c9ccbe6. 
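Once the image lands (about 9s and ~157 MB later, per the entry above), containerd creates the calico-node container inside the existing sandbox, connects to a runtime v2 shim over ttrpc, and starts the task under a cri-containerd systemd scope. A rough Go analogue of that CreateContainer/StartContainer pair using the containerd client (a sketch under the same socket/namespace assumptions as before; the real path also wires the container into the CRI pod sandbox, which is omitted here):

    package main

    import (
        "context"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Reuse the image pulled in the previous step.
        img, err := client.GetImage(ctx, "ghcr.io/flatcar/calico/node:v3.30.4")
        if err != nil {
            log.Fatal(err)
        }

        // Create a container plus a task, then start it.
        c, err := client.NewContainer(ctx, "calico-node-demo",
            containerd.WithNewSnapshot("calico-node-demo-snap", img),
            containerd.WithNewSpec(oci.WithImageConfig(img)))
        if err != nil {
            log.Fatal(err)
        }
        defer c.Delete(ctx, containerd.WithSnapshotCleanup)

        task, err := c.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)

        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
        log.Println("started task, pid", task.Pid())
    }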
Dec 16 14:00:22.562859 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 14:00:22.563101 kernel: audit: type=1334 audit(1765893622.556:558): prog-id=175 op=LOAD Dec 16 14:00:22.556000 audit: BPF prog-id=175 op=LOAD Dec 16 14:00:22.556000 audit[3854]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=3367 pid=3854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:22.605825 kernel: audit: type=1300 audit(1765893622.556:558): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=3367 pid=3854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:22.606240 kernel: audit: type=1327 audit(1765893622.556:558): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635336337316536663338373132333638623964313965646261663931 Dec 16 14:00:22.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635336337316536663338373132333638623964313965646261663931 Dec 16 14:00:22.644279 kernel: audit: type=1334 audit(1765893622.556:559): prog-id=176 op=LOAD Dec 16 14:00:22.556000 audit: BPF prog-id=176 op=LOAD Dec 16 14:00:22.556000 audit[3854]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=3367 pid=3854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:22.674188 kernel: audit: type=1300 audit(1765893622.556:559): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=3367 pid=3854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:22.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635336337316536663338373132333638623964313965646261663931 Dec 16 14:00:22.703024 kernel: audit: type=1327 audit(1765893622.556:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635336337316536663338373132333638623964313965646261663931 Dec 16 14:00:22.556000 audit: BPF prog-id=176 op=UNLOAD Dec 16 14:00:22.556000 audit[3854]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:22.740287 kernel: audit: type=1334 audit(1765893622.556:560): prog-id=176 op=UNLOAD Dec 16 14:00:22.740435 kernel: audit: type=1300 
audit(1765893622.556:560): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:22.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635336337316536663338373132333638623964313965646261663931 Dec 16 14:00:22.773534 containerd[1603]: time="2025-12-16T14:00:22.750710317Z" level=info msg="StartContainer for \"f53c71e6f38712368b9d19edbaf916ef94600e91b46c404fc5c604fe3c9ccbe6\" returns successfully" Dec 16 14:00:22.774227 kernel: audit: type=1327 audit(1765893622.556:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635336337316536663338373132333638623964313965646261663931 Dec 16 14:00:22.556000 audit: BPF prog-id=175 op=UNLOAD Dec 16 14:00:22.781915 kernel: audit: type=1334 audit(1765893622.556:561): prog-id=175 op=UNLOAD Dec 16 14:00:22.556000 audit[3854]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3367 pid=3854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:22.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635336337316536663338373132333638623964313965646261663931 Dec 16 14:00:22.556000 audit: BPF prog-id=177 op=LOAD Dec 16 14:00:22.556000 audit[3854]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=3367 pid=3854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:22.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635336337316536663338373132333638623964313965646261663931 Dec 16 14:00:22.892143 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 14:00:22.892330 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
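The audit records interleaved above (BPF prog-id LOAD/UNLOAD pairs from the runc invocation that starts calico-node) carry their command line in the PROCTITLE field, which auditd emits as a hex string with NUL bytes separating the arguments. A short Go sketch decodes a prefix of the value shown in the log (the full value is truncated there, so only the leading arguments are decoded here):

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func main() {
        // Prefix of the proctitle value from the audit records above:
        // hex-encoded argv, NUL-separated.
        const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"

        raw, err := hex.DecodeString(proctitle)
        if err != nil {
            panic(err)
        }
        args := strings.Split(string(raw), "\x00")
        fmt.Println(strings.Join(args, " ")) // runc --root /run/containerd/runc/k8s.io --log ...
    }

The WireGuard module load that follows is expected on a Calico node: calico-node probes for WireGuard support so it can offer node-to-node encryption when enabled.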
Dec 16 14:00:23.212864 kubelet[2843]: I1216 14:00:23.212793 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmx72\" (UniqueName: \"kubernetes.io/projected/8027bcee-b711-46ed-b726-80ae4d1f28c2-kube-api-access-mmx72\") pod \"8027bcee-b711-46ed-b726-80ae4d1f28c2\" (UID: \"8027bcee-b711-46ed-b726-80ae4d1f28c2\") " Dec 16 14:00:23.213985 kubelet[2843]: I1216 14:00:23.213167 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8027bcee-b711-46ed-b726-80ae4d1f28c2-whisker-ca-bundle\") pod \"8027bcee-b711-46ed-b726-80ae4d1f28c2\" (UID: \"8027bcee-b711-46ed-b726-80ae4d1f28c2\") " Dec 16 14:00:23.214624 kubelet[2843]: I1216 14:00:23.214434 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8027bcee-b711-46ed-b726-80ae4d1f28c2-whisker-backend-key-pair\") pod \"8027bcee-b711-46ed-b726-80ae4d1f28c2\" (UID: \"8027bcee-b711-46ed-b726-80ae4d1f28c2\") " Dec 16 14:00:23.220374 kubelet[2843]: I1216 14:00:23.219846 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8027bcee-b711-46ed-b726-80ae4d1f28c2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8027bcee-b711-46ed-b726-80ae4d1f28c2" (UID: "8027bcee-b711-46ed-b726-80ae4d1f28c2"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 14:00:23.220374 kubelet[2843]: I1216 14:00:23.220293 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8027bcee-b711-46ed-b726-80ae4d1f28c2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8027bcee-b711-46ed-b726-80ae4d1f28c2" (UID: "8027bcee-b711-46ed-b726-80ae4d1f28c2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 14:00:23.220857 kubelet[2843]: I1216 14:00:23.220735 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8027bcee-b711-46ed-b726-80ae4d1f28c2-kube-api-access-mmx72" (OuterVolumeSpecName: "kube-api-access-mmx72") pod "8027bcee-b711-46ed-b726-80ae4d1f28c2" (UID: "8027bcee-b711-46ed-b726-80ae4d1f28c2"). InnerVolumeSpecName "kube-api-access-mmx72". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 14:00:23.319392 systemd[1]: var-lib-kubelet-pods-8027bcee\x2db711\x2d46ed\x2db726\x2d80ae4d1f28c2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmmx72.mount: Deactivated successfully. 
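The \x2d and \x7e sequences in the systemd mount unit names above are systemd's unit-name escaping of the kubelet volume paths being torn down for the deleted whisker pod ("/" is written as "-", so a literal "-" or "~" must be escaped as \xHH). A small sketch reverses that escaping (illustrative only, not the systemd-escape implementation):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // unescapeUnitName reverses systemd's unit-name escaping as it appears in
    // the mount units above: "-" stands for "/", and \xHH encodes a raw byte.
    func unescapeUnitName(s string) string {
        var b strings.Builder
        for i := 0; i < len(s); i++ {
            switch {
            case s[i] == '\\' && i+3 < len(s) && s[i+1] == 'x':
                if v, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
                    b.WriteByte(byte(v))
                    i += 3
                    continue
                }
                b.WriteByte(s[i])
            case s[i] == '-':
                b.WriteByte('/')
            default:
                b.WriteByte(s[i])
            }
        }
        return b.String()
    }

    func main() {
        unit := `var-lib-kubelet-pods-8027bcee\x2db711\x2d46ed\x2db726\x2d80ae4d1f28c2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmmx72`
        fmt.Println("/" + unescapeUnitName(unit))
        // -> /var/lib/kubelet/pods/8027bcee-b711-46ed-b726-80ae4d1f28c2/volumes/kubernetes.io~projected/kube-api-access-mmx72
    }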
Dec 16 14:00:23.320923 kubelet[2843]: I1216 14:00:23.320700 2843 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8027bcee-b711-46ed-b726-80ae4d1f28c2-whisker-ca-bundle\") on node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 14:00:23.320923 kubelet[2843]: I1216 14:00:23.320779 2843 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mmx72\" (UniqueName: \"kubernetes.io/projected/8027bcee-b711-46ed-b726-80ae4d1f28c2-kube-api-access-mmx72\") on node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 14:00:23.320923 kubelet[2843]: I1216 14:00:23.320803 2843 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8027bcee-b711-46ed-b726-80ae4d1f28c2-whisker-backend-key-pair\") on node \"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 14:00:23.321456 systemd[1]: var-lib-kubelet-pods-8027bcee\x2db711\x2d46ed\x2db726\x2d80ae4d1f28c2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 14:00:23.347376 systemd[1]: Removed slice kubepods-besteffort-pod8027bcee_b711_46ed_b726_80ae4d1f28c2.slice - libcontainer container kubepods-besteffort-pod8027bcee_b711_46ed_b726_80ae4d1f28c2.slice. Dec 16 14:00:23.389659 kubelet[2843]: I1216 14:00:23.389578 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7829c" podStartSLOduration=1.9934284839999998 podStartE2EDuration="27.389553009s" podCreationTimestamp="2025-12-16 13:59:56 +0000 UTC" firstStartedPulling="2025-12-16 13:59:56.959091611 +0000 UTC m=+27.126434883" lastFinishedPulling="2025-12-16 14:00:22.35521613 +0000 UTC m=+52.522559408" observedRunningTime="2025-12-16 14:00:23.357910982 +0000 UTC m=+53.525254270" watchObservedRunningTime="2025-12-16 14:00:23.389553009 +0000 UTC m=+53.556896299" Dec 16 14:00:23.454595 systemd[1]: Created slice kubepods-besteffort-pod4e05f7b2_49a2_4ebc_8e80_5e6f910c574a.slice - libcontainer container kubepods-besteffort-pod4e05f7b2_49a2_4ebc_8e80_5e6f910c574a.slice. 
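The pod_startup_latency_tracker entry above reports podStartSLOduration of roughly 1.993s against a podStartE2EDuration of 27.39s for calico-node: the SLO figure excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check of that arithmetic using the timestamps copied from the entry:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps from the pod_startup_latency_tracker entry above.
        created := mustParse("2025-12-16 13:59:56 +0000 UTC")
        firstPull := mustParse("2025-12-16 13:59:56.959091611 +0000 UTC")
        lastPull := mustParse("2025-12-16 14:00:22.35521613 +0000 UTC")
        running := mustParse("2025-12-16 14:00:23.389553009 +0000 UTC")

        e2e := running.Sub(created)     // podStartE2EDuration ~= 27.389553009s
        pull := lastPull.Sub(firstPull) // time spent pulling the calico/node image
        fmt.Println("E2E:", e2e, "pull:", pull, "SLO (E2E minus pull):", e2e-pull)
        // SLO (E2E minus pull) ~= 1.99342849s, matching the reported value up to rounding.
    }

The new besteffort slice created right after marks the replacement whisker pod (whisker-7b5bfd867b-v5ftq) scheduled in place of the one whose volumes were just detached.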
Dec 16 14:00:23.523264 kubelet[2843]: I1216 14:00:23.523065 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgctf\" (UniqueName: \"kubernetes.io/projected/4e05f7b2-49a2-4ebc-8e80-5e6f910c574a-kube-api-access-wgctf\") pod \"whisker-7b5bfd867b-v5ftq\" (UID: \"4e05f7b2-49a2-4ebc-8e80-5e6f910c574a\") " pod="calico-system/whisker-7b5bfd867b-v5ftq" Dec 16 14:00:23.523264 kubelet[2843]: I1216 14:00:23.523144 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4e05f7b2-49a2-4ebc-8e80-5e6f910c574a-whisker-backend-key-pair\") pod \"whisker-7b5bfd867b-v5ftq\" (UID: \"4e05f7b2-49a2-4ebc-8e80-5e6f910c574a\") " pod="calico-system/whisker-7b5bfd867b-v5ftq" Dec 16 14:00:23.523264 kubelet[2843]: I1216 14:00:23.523185 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e05f7b2-49a2-4ebc-8e80-5e6f910c574a-whisker-ca-bundle\") pod \"whisker-7b5bfd867b-v5ftq\" (UID: \"4e05f7b2-49a2-4ebc-8e80-5e6f910c574a\") " pod="calico-system/whisker-7b5bfd867b-v5ftq" Dec 16 14:00:23.762847 containerd[1603]: time="2025-12-16T14:00:23.762732868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b5bfd867b-v5ftq,Uid:4e05f7b2-49a2-4ebc-8e80-5e6f910c574a,Namespace:calico-system,Attempt:0,}" Dec 16 14:00:23.924808 systemd-networkd[1502]: cali973889bab2b: Link UP Dec 16 14:00:23.926638 systemd-networkd[1502]: cali973889bab2b: Gained carrier Dec 16 14:00:23.951136 containerd[1603]: 2025-12-16 14:00:23.812 [INFO][3942] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 14:00:23.951136 containerd[1603]: 2025-12-16 14:00:23.826 [INFO][3942] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-whisker--7b5bfd867b--v5ftq-eth0 whisker-7b5bfd867b- calico-system 4e05f7b2-49a2-4ebc-8e80-5e6f910c574a 905 0 2025-12-16 14:00:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b5bfd867b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal whisker-7b5bfd867b-v5ftq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali973889bab2b [] [] }} ContainerID="f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" Namespace="calico-system" Pod="whisker-7b5bfd867b-v5ftq" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-whisker--7b5bfd867b--v5ftq-" Dec 16 14:00:23.951136 containerd[1603]: 2025-12-16 14:00:23.826 [INFO][3942] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" Namespace="calico-system" Pod="whisker-7b5bfd867b-v5ftq" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-whisker--7b5bfd867b--v5ftq-eth0" Dec 16 14:00:23.951136 containerd[1603]: 2025-12-16 14:00:23.864 [INFO][3955] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" HandleID="k8s-pod-network.f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" 
Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-whisker--7b5bfd867b--v5ftq-eth0" Dec 16 14:00:23.951425 containerd[1603]: 2025-12-16 14:00:23.865 [INFO][3955] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" HandleID="k8s-pod-network.f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-whisker--7b5bfd867b--v5ftq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", "pod":"whisker-7b5bfd867b-v5ftq", "timestamp":"2025-12-16 14:00:23.864863308 +0000 UTC"}, Hostname:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 14:00:23.951425 containerd[1603]: 2025-12-16 14:00:23.865 [INFO][3955] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 14:00:23.951425 containerd[1603]: 2025-12-16 14:00:23.865 [INFO][3955] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 14:00:23.951425 containerd[1603]: 2025-12-16 14:00:23.865 [INFO][3955] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal' Dec 16 14:00:23.951425 containerd[1603]: 2025-12-16 14:00:23.873 [INFO][3955] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:23.951425 containerd[1603]: 2025-12-16 14:00:23.878 [INFO][3955] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:23.951425 containerd[1603]: 2025-12-16 14:00:23.888 [INFO][3955] ipam/ipam.go 511: Trying affinity for 192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:23.951425 containerd[1603]: 2025-12-16 14:00:23.892 [INFO][3955] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:23.951677 containerd[1603]: 2025-12-16 14:00:23.895 [INFO][3955] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:23.951677 containerd[1603]: 2025-12-16 14:00:23.895 [INFO][3955] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:23.951677 containerd[1603]: 2025-12-16 14:00:23.897 [INFO][3955] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7 Dec 16 14:00:23.951677 containerd[1603]: 2025-12-16 14:00:23.903 [INFO][3955] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.23.0/26 handle="k8s-pod-network.f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:23.951677 containerd[1603]: 
2025-12-16 14:00:23.908 [INFO][3955] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.23.1/26] block=192.168.23.0/26 handle="k8s-pod-network.f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:23.951677 containerd[1603]: 2025-12-16 14:00:23.909 [INFO][3955] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.1/26] handle="k8s-pod-network.f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:23.951677 containerd[1603]: 2025-12-16 14:00:23.909 [INFO][3955] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 14:00:23.951677 containerd[1603]: 2025-12-16 14:00:23.909 [INFO][3955] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.23.1/26] IPv6=[] ContainerID="f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" HandleID="k8s-pod-network.f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-whisker--7b5bfd867b--v5ftq-eth0" Dec 16 14:00:23.951952 containerd[1603]: 2025-12-16 14:00:23.912 [INFO][3942] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" Namespace="calico-system" Pod="whisker-7b5bfd867b-v5ftq" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-whisker--7b5bfd867b--v5ftq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-whisker--7b5bfd867b--v5ftq-eth0", GenerateName:"whisker-7b5bfd867b-", Namespace:"calico-system", SelfLink:"", UID:"4e05f7b2-49a2-4ebc-8e80-5e6f910c574a", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b5bfd867b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-7b5bfd867b-v5ftq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.23.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali973889bab2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:00:23.952032 containerd[1603]: 2025-12-16 14:00:23.913 [INFO][3942] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.1/32] ContainerID="f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" Namespace="calico-system" Pod="whisker-7b5bfd867b-v5ftq" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-whisker--7b5bfd867b--v5ftq-eth0" Dec 16 14:00:23.952032 containerd[1603]: 2025-12-16 14:00:23.913 [INFO][3942] cni-plugin/dataplane_linux.go 69: Setting the host 
side veth name to cali973889bab2b ContainerID="f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" Namespace="calico-system" Pod="whisker-7b5bfd867b-v5ftq" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-whisker--7b5bfd867b--v5ftq-eth0" Dec 16 14:00:23.952032 containerd[1603]: 2025-12-16 14:00:23.927 [INFO][3942] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" Namespace="calico-system" Pod="whisker-7b5bfd867b-v5ftq" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-whisker--7b5bfd867b--v5ftq-eth0" Dec 16 14:00:23.952122 containerd[1603]: 2025-12-16 14:00:23.927 [INFO][3942] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" Namespace="calico-system" Pod="whisker-7b5bfd867b-v5ftq" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-whisker--7b5bfd867b--v5ftq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-whisker--7b5bfd867b--v5ftq-eth0", GenerateName:"whisker-7b5bfd867b-", Namespace:"calico-system", SelfLink:"", UID:"4e05f7b2-49a2-4ebc-8e80-5e6f910c574a", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 14, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b5bfd867b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", ContainerID:"f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7", Pod:"whisker-7b5bfd867b-v5ftq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.23.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali973889bab2b", MAC:"a2:76:8d:d9:60:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:00:23.952192 containerd[1603]: 2025-12-16 14:00:23.945 [INFO][3942] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" Namespace="calico-system" Pod="whisker-7b5bfd867b-v5ftq" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-whisker--7b5bfd867b--v5ftq-eth0" Dec 16 14:00:23.989966 containerd[1603]: time="2025-12-16T14:00:23.989838466Z" level=info msg="connecting to shim f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7" address="unix:///run/containerd/s/da84e4b57175ba2f7221e981783ad5bf3e12b4d0c35b8945343e8a63eb3ea944" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:00:24.024069 systemd[1]: Started cri-containerd-f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7.scope - libcontainer container 
f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7. Dec 16 14:00:24.036238 kubelet[2843]: I1216 14:00:24.036188 2843 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8027bcee-b711-46ed-b726-80ae4d1f28c2" path="/var/lib/kubelet/pods/8027bcee-b711-46ed-b726-80ae4d1f28c2/volumes" Dec 16 14:00:24.045000 audit: BPF prog-id=178 op=LOAD Dec 16 14:00:24.046000 audit: BPF prog-id=179 op=LOAD Dec 16 14:00:24.046000 audit[3988]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3977 pid=3988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:24.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630633461356331346336653630663065323830396334663961623736 Dec 16 14:00:24.046000 audit: BPF prog-id=179 op=UNLOAD Dec 16 14:00:24.046000 audit[3988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3977 pid=3988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:24.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630633461356331346336653630663065323830396334663961623736 Dec 16 14:00:24.046000 audit: BPF prog-id=180 op=LOAD Dec 16 14:00:24.046000 audit[3988]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3977 pid=3988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:24.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630633461356331346336653630663065323830396334663961623736 Dec 16 14:00:24.046000 audit: BPF prog-id=181 op=LOAD Dec 16 14:00:24.046000 audit[3988]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3977 pid=3988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:24.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630633461356331346336653630663065323830396334663961623736 Dec 16 14:00:24.046000 audit: BPF prog-id=181 op=UNLOAD Dec 16 14:00:24.046000 audit[3988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3977 pid=3988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:24.046000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630633461356331346336653630663065323830396334663961623736 Dec 16 14:00:24.046000 audit: BPF prog-id=180 op=UNLOAD Dec 16 14:00:24.046000 audit[3988]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3977 pid=3988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:24.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630633461356331346336653630663065323830396334663961623736 Dec 16 14:00:24.046000 audit: BPF prog-id=182 op=LOAD Dec 16 14:00:24.046000 audit[3988]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3977 pid=3988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:24.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630633461356331346336653630663065323830396334663961623736 Dec 16 14:00:24.101348 containerd[1603]: time="2025-12-16T14:00:24.101221827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b5bfd867b-v5ftq,Uid:4e05f7b2-49a2-4ebc-8e80-5e6f910c574a,Namespace:calico-system,Attempt:0,} returns sandbox id \"f0c4a5c14c6e60f0e2809c4f9ab7600d99b890d33b3771aa450c6f45c6fc35d7\"" Dec 16 14:00:24.104541 containerd[1603]: time="2025-12-16T14:00:24.104474547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 14:00:24.324476 containerd[1603]: time="2025-12-16T14:00:24.324239338Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:00:24.327580 containerd[1603]: time="2025-12-16T14:00:24.327467593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 14:00:24.327705 containerd[1603]: time="2025-12-16T14:00:24.327578139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 14:00:24.328016 kubelet[2843]: E1216 14:00:24.327938 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 14:00:24.329918 kubelet[2843]: E1216 14:00:24.328031 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 14:00:24.330249 kubelet[2843]: E1216 14:00:24.328323 
2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a4506af38c3640db88d3886b67f3e843,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wgctf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b5bfd867b-v5ftq_calico-system(4e05f7b2-49a2-4ebc-8e80-5e6f910c574a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 14:00:24.333655 containerd[1603]: time="2025-12-16T14:00:24.333571877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 14:00:24.810955 containerd[1603]: time="2025-12-16T14:00:24.810826583Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:00:24.813729 containerd[1603]: time="2025-12-16T14:00:24.813556323Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 14:00:24.813729 containerd[1603]: time="2025-12-16T14:00:24.813686014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 14:00:24.815143 kubelet[2843]: E1216 14:00:24.814008 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 14:00:24.815143 kubelet[2843]: E1216 14:00:24.814066 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 14:00:24.815143 kubelet[2843]: E1216 14:00:24.814220 2843 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgctf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b5bfd867b-v5ftq_calico-system(4e05f7b2-49a2-4ebc-8e80-5e6f910c574a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 14:00:24.815934 kubelet[2843]: E1216 14:00:24.815797 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5bfd867b-v5ftq" podUID="4e05f7b2-49a2-4ebc-8e80-5e6f910c574a" Dec 16 14:00:25.032628 containerd[1603]: time="2025-12-16T14:00:25.032538577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d8d578c6b-htjdj,Uid:1769a332-0974-4355-84f4-605660a8e93f,Namespace:calico-system,Attempt:0,}" Dec 16 14:00:25.033273 containerd[1603]: time="2025-12-16T14:00:25.032475309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66cd68bd7b-ds2n7,Uid:d41be582-68e3-4041-abac-e335f6c6ba13,Namespace:calico-apiserver,Attempt:0,}" Dec 16 14:00:25.070000 audit: BPF prog-id=183 op=LOAD Dec 16 14:00:25.070000 
audit[4171]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff1616b950 a2=98 a3=1fffffffffffffff items=0 ppid=4058 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.070000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 14:00:25.070000 audit: BPF prog-id=183 op=UNLOAD Dec 16 14:00:25.070000 audit[4171]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff1616b920 a3=0 items=0 ppid=4058 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.070000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 14:00:25.070000 audit: BPF prog-id=184 op=LOAD Dec 16 14:00:25.070000 audit[4171]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff1616b830 a2=94 a3=3 items=0 ppid=4058 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.070000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 14:00:25.070000 audit: BPF prog-id=184 op=UNLOAD Dec 16 14:00:25.070000 audit[4171]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff1616b830 a2=94 a3=3 items=0 ppid=4058 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.070000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 14:00:25.070000 audit: BPF prog-id=185 op=LOAD Dec 16 14:00:25.070000 audit[4171]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff1616b870 a2=94 a3=7fff1616ba50 items=0 ppid=4058 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.070000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 14:00:25.070000 audit: BPF prog-id=185 op=UNLOAD Dec 16 14:00:25.070000 audit[4171]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff1616b870 a2=94 
a3=7fff1616ba50 items=0 ppid=4058 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.070000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 14:00:25.073000 audit: BPF prog-id=186 op=LOAD Dec 16 14:00:25.073000 audit[4173]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdc6e1b5a0 a2=98 a3=3 items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.073000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.074000 audit: BPF prog-id=186 op=UNLOAD Dec 16 14:00:25.074000 audit[4173]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdc6e1b570 a3=0 items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.074000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.074000 audit: BPF prog-id=187 op=LOAD Dec 16 14:00:25.074000 audit[4173]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdc6e1b390 a2=94 a3=54428f items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.074000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.074000 audit: BPF prog-id=187 op=UNLOAD Dec 16 14:00:25.074000 audit[4173]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdc6e1b390 a2=94 a3=54428f items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.074000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.074000 audit: BPF prog-id=188 op=LOAD Dec 16 14:00:25.074000 audit[4173]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdc6e1b3c0 a2=94 a3=2 items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.074000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.075000 audit: BPF prog-id=188 op=UNLOAD Dec 16 14:00:25.075000 audit[4173]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdc6e1b3c0 a2=0 a3=2 items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.075000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.316568 kubelet[2843]: E1216 14:00:25.316490 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5bfd867b-v5ftq" podUID="4e05f7b2-49a2-4ebc-8e80-5e6f910c574a" Dec 16 14:00:25.351815 systemd-networkd[1502]: cali176548b1225: Link UP Dec 16 14:00:25.354802 systemd-networkd[1502]: cali176548b1225: Gained carrier Dec 16 14:00:25.392835 containerd[1603]: 2025-12-16 14:00:25.195 [INFO][4161] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--ds2n7-eth0 calico-apiserver-66cd68bd7b- calico-apiserver d41be582-68e3-4041-abac-e335f6c6ba13 836 0 2025-12-16 13:59:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66cd68bd7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal calico-apiserver-66cd68bd7b-ds2n7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali176548b1225 [] [] }} ContainerID="3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" Namespace="calico-apiserver" Pod="calico-apiserver-66cd68bd7b-ds2n7" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--ds2n7-" Dec 16 14:00:25.392835 containerd[1603]: 2025-12-16 14:00:25.196 [INFO][4161] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" Namespace="calico-apiserver" Pod="calico-apiserver-66cd68bd7b-ds2n7" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--ds2n7-eth0" Dec 16 14:00:25.392835 containerd[1603]: 2025-12-16 14:00:25.262 [INFO][4189] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" HandleID="k8s-pod-network.3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--ds2n7-eth0" Dec 16 14:00:25.393182 containerd[1603]: 2025-12-16 14:00:25.264 [INFO][4189] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" HandleID="k8s-pod-network.3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--ds2n7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031d9a0), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", "pod":"calico-apiserver-66cd68bd7b-ds2n7", "timestamp":"2025-12-16 14:00:25.262054375 +0000 UTC"}, Hostname:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 14:00:25.393182 containerd[1603]: 2025-12-16 14:00:25.264 [INFO][4189] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 14:00:25.393182 containerd[1603]: 2025-12-16 14:00:25.264 [INFO][4189] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 14:00:25.393182 containerd[1603]: 2025-12-16 14:00:25.264 [INFO][4189] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal' Dec 16 14:00:25.393182 containerd[1603]: 2025-12-16 14:00:25.289 [INFO][4189] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.393182 containerd[1603]: 2025-12-16 14:00:25.294 [INFO][4189] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.393182 containerd[1603]: 2025-12-16 14:00:25.300 [INFO][4189] ipam/ipam.go 511: Trying affinity for 192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.393182 containerd[1603]: 2025-12-16 14:00:25.302 [INFO][4189] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.393580 containerd[1603]: 2025-12-16 14:00:25.306 [INFO][4189] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.393580 containerd[1603]: 2025-12-16 14:00:25.306 [INFO][4189] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.393580 containerd[1603]: 2025-12-16 14:00:25.308 [INFO][4189] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5 Dec 16 14:00:25.393580 containerd[1603]: 2025-12-16 14:00:25.319 [INFO][4189] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.23.0/26 handle="k8s-pod-network.3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.393580 containerd[1603]: 2025-12-16 14:00:25.333 [INFO][4189] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.23.2/26] block=192.168.23.0/26 handle="k8s-pod-network.3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.393580 containerd[1603]: 2025-12-16 14:00:25.333 [INFO][4189] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.2/26] handle="k8s-pod-network.3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.393580 containerd[1603]: 2025-12-16 
14:00:25.334 [INFO][4189] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 14:00:25.393580 containerd[1603]: 2025-12-16 14:00:25.334 [INFO][4189] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.23.2/26] IPv6=[] ContainerID="3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" HandleID="k8s-pod-network.3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--ds2n7-eth0" Dec 16 14:00:25.395692 containerd[1603]: 2025-12-16 14:00:25.340 [INFO][4161] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" Namespace="calico-apiserver" Pod="calico-apiserver-66cd68bd7b-ds2n7" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--ds2n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--ds2n7-eth0", GenerateName:"calico-apiserver-66cd68bd7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d41be582-68e3-4041-abac-e335f6c6ba13", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66cd68bd7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-66cd68bd7b-ds2n7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali176548b1225", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:00:25.396117 containerd[1603]: 2025-12-16 14:00:25.340 [INFO][4161] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.2/32] ContainerID="3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" Namespace="calico-apiserver" Pod="calico-apiserver-66cd68bd7b-ds2n7" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--ds2n7-eth0" Dec 16 14:00:25.396117 containerd[1603]: 2025-12-16 14:00:25.341 [INFO][4161] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali176548b1225 ContainerID="3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" Namespace="calico-apiserver" Pod="calico-apiserver-66cd68bd7b-ds2n7" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--ds2n7-eth0" Dec 16 14:00:25.396117 containerd[1603]: 2025-12-16 14:00:25.356 [INFO][4161] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" Namespace="calico-apiserver" Pod="calico-apiserver-66cd68bd7b-ds2n7" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--ds2n7-eth0" Dec 16 14:00:25.396526 containerd[1603]: 2025-12-16 14:00:25.357 [INFO][4161] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" Namespace="calico-apiserver" Pod="calico-apiserver-66cd68bd7b-ds2n7" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--ds2n7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--ds2n7-eth0", GenerateName:"calico-apiserver-66cd68bd7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d41be582-68e3-4041-abac-e335f6c6ba13", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66cd68bd7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", ContainerID:"3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5", Pod:"calico-apiserver-66cd68bd7b-ds2n7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali176548b1225", MAC:"7a:e1:5a:7d:b9:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:00:25.396526 containerd[1603]: 2025-12-16 14:00:25.386 [INFO][4161] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" Namespace="calico-apiserver" Pod="calico-apiserver-66cd68bd7b-ds2n7" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--ds2n7-eth0" Dec 16 14:00:25.416000 audit[4206]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:00:25.416000 audit[4206]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe755cef60 a2=0 a3=7ffe755cef4c items=0 ppid=3030 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.416000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:00:25.423000 audit[4206]: NETFILTER_CFG 
table=nat:122 family=2 entries=14 op=nft_register_rule pid=4206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:00:25.423000 audit[4206]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe755cef60 a2=0 a3=0 items=0 ppid=3030 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.423000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:00:25.452150 containerd[1603]: time="2025-12-16T14:00:25.451930423Z" level=info msg="connecting to shim 3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5" address="unix:///run/containerd/s/46390f088b24b5adaa2f84d402bb207d1b4367f742b68bc2f79922b7eaa51caa" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:00:25.509422 systemd-networkd[1502]: calid52b3c9bcd4: Link UP Dec 16 14:00:25.510139 systemd[1]: Started cri-containerd-3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5.scope - libcontainer container 3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5. Dec 16 14:00:25.517975 systemd-networkd[1502]: calid52b3c9bcd4: Gained carrier Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.192 [INFO][4158] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--kube--controllers--6d8d578c6b--htjdj-eth0 calico-kube-controllers-6d8d578c6b- calico-system 1769a332-0974-4355-84f4-605660a8e93f 833 0 2025-12-16 13:59:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d8d578c6b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal calico-kube-controllers-6d8d578c6b-htjdj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid52b3c9bcd4 [] [] }} ContainerID="6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" Namespace="calico-system" Pod="calico-kube-controllers-6d8d578c6b-htjdj" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--kube--controllers--6d8d578c6b--htjdj-" Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.193 [INFO][4158] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" Namespace="calico-system" Pod="calico-kube-controllers-6d8d578c6b-htjdj" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--kube--controllers--6d8d578c6b--htjdj-eth0" Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.288 [INFO][4187] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" HandleID="k8s-pod-network.6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--kube--controllers--6d8d578c6b--htjdj-eth0" Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.288 [INFO][4187] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" HandleID="k8s-pod-network.6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--kube--controllers--6d8d578c6b--htjdj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf890), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", "pod":"calico-kube-controllers-6d8d578c6b-htjdj", "timestamp":"2025-12-16 14:00:25.288388374 +0000 UTC"}, Hostname:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.288 [INFO][4187] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.334 [INFO][4187] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.334 [INFO][4187] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal' Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.389 [INFO][4187] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.406 [INFO][4187] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.413 [INFO][4187] ipam/ipam.go 511: Trying affinity for 192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.417 [INFO][4187] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.421 [INFO][4187] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.421 [INFO][4187] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.427 [INFO][4187] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8 Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.436 [INFO][4187] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.23.0/26 handle="k8s-pod-network.6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.447 [INFO][4187] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.23.3/26] block=192.168.23.0/26 
handle="k8s-pod-network.6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.447 [INFO][4187] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.3/26] handle="k8s-pod-network.6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.449 [INFO][4187] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 14:00:25.560573 containerd[1603]: 2025-12-16 14:00:25.450 [INFO][4187] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.23.3/26] IPv6=[] ContainerID="6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" HandleID="k8s-pod-network.6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--kube--controllers--6d8d578c6b--htjdj-eth0" Dec 16 14:00:25.562854 containerd[1603]: 2025-12-16 14:00:25.490 [INFO][4158] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" Namespace="calico-system" Pod="calico-kube-controllers-6d8d578c6b-htjdj" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--kube--controllers--6d8d578c6b--htjdj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--kube--controllers--6d8d578c6b--htjdj-eth0", GenerateName:"calico-kube-controllers-6d8d578c6b-", Namespace:"calico-system", SelfLink:"", UID:"1769a332-0974-4355-84f4-605660a8e93f", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 59, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d8d578c6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-6d8d578c6b-htjdj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid52b3c9bcd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:00:25.562854 containerd[1603]: 2025-12-16 14:00:25.491 [INFO][4158] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.3/32] ContainerID="6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" Namespace="calico-system" Pod="calico-kube-controllers-6d8d578c6b-htjdj" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--kube--controllers--6d8d578c6b--htjdj-eth0" Dec 16 14:00:25.562854 
containerd[1603]: 2025-12-16 14:00:25.491 [INFO][4158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid52b3c9bcd4 ContainerID="6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" Namespace="calico-system" Pod="calico-kube-controllers-6d8d578c6b-htjdj" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--kube--controllers--6d8d578c6b--htjdj-eth0" Dec 16 14:00:25.562854 containerd[1603]: 2025-12-16 14:00:25.520 [INFO][4158] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" Namespace="calico-system" Pod="calico-kube-controllers-6d8d578c6b-htjdj" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--kube--controllers--6d8d578c6b--htjdj-eth0" Dec 16 14:00:25.562854 containerd[1603]: 2025-12-16 14:00:25.521 [INFO][4158] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" Namespace="calico-system" Pod="calico-kube-controllers-6d8d578c6b-htjdj" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--kube--controllers--6d8d578c6b--htjdj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--kube--controllers--6d8d578c6b--htjdj-eth0", GenerateName:"calico-kube-controllers-6d8d578c6b-", Namespace:"calico-system", SelfLink:"", UID:"1769a332-0974-4355-84f4-605660a8e93f", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 59, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d8d578c6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", ContainerID:"6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8", Pod:"calico-kube-controllers-6d8d578c6b-htjdj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid52b3c9bcd4", MAC:"26:fb:1d:4e:c2:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:00:25.562854 containerd[1603]: 2025-12-16 14:00:25.555 [INFO][4158] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" Namespace="calico-system" Pod="calico-kube-controllers-6d8d578c6b-htjdj" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--kube--controllers--6d8d578c6b--htjdj-eth0" Dec 16 14:00:25.573000 audit: BPF prog-id=189 op=LOAD Dec 16 14:00:25.574000 audit: BPF prog-id=190 op=LOAD Dec 16 
14:00:25.574000 audit[4230]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4219 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636631323465323030663335393363633031356231363162633338 Dec 16 14:00:25.574000 audit: BPF prog-id=190 op=UNLOAD Dec 16 14:00:25.574000 audit[4230]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4219 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636631323465323030663335393363633031356231363162633338 Dec 16 14:00:25.575000 audit: BPF prog-id=191 op=LOAD Dec 16 14:00:25.575000 audit[4230]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4219 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636631323465323030663335393363633031356231363162633338 Dec 16 14:00:25.575000 audit: BPF prog-id=192 op=LOAD Dec 16 14:00:25.575000 audit[4230]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4219 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636631323465323030663335393363633031356231363162633338 Dec 16 14:00:25.575000 audit: BPF prog-id=192 op=UNLOAD Dec 16 14:00:25.575000 audit[4230]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4219 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636631323465323030663335393363633031356231363162633338 Dec 16 14:00:25.575000 audit: BPF prog-id=191 op=UNLOAD Dec 16 14:00:25.575000 audit[4230]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4219 pid=4230 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636631323465323030663335393363633031356231363162633338 Dec 16 14:00:25.575000 audit: BPF prog-id=193 op=LOAD Dec 16 14:00:25.575000 audit[4230]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4219 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363636631323465323030663335393363633031356231363162633338 Dec 16 14:00:25.607833 containerd[1603]: time="2025-12-16T14:00:25.606822075Z" level=info msg="connecting to shim 6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8" address="unix:///run/containerd/s/ae8a51427653fd9cca44c34e8c87593f3986e5d2681285724bab1f5c7730eaa2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:00:25.669021 systemd[1]: Started cri-containerd-6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8.scope - libcontainer container 6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8. Dec 16 14:00:25.742000 audit: BPF prog-id=194 op=LOAD Dec 16 14:00:25.747815 containerd[1603]: time="2025-12-16T14:00:25.747699419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66cd68bd7b-ds2n7,Uid:d41be582-68e3-4041-abac-e335f6c6ba13,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3ccf124e200f3593cc015b161bc382bba890d8430caecca7e0bc83a837a62be5\"" Dec 16 14:00:25.746000 audit: BPF prog-id=195 op=LOAD Dec 16 14:00:25.746000 audit[4275]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4264 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636323633363239343136393534353163323237313634356532313061 Dec 16 14:00:25.746000 audit: BPF prog-id=195 op=UNLOAD Dec 16 14:00:25.746000 audit[4275]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4264 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636323633363239343136393534353163323237313634356532313061 Dec 16 14:00:25.747000 audit: BPF prog-id=196 op=LOAD Dec 16 14:00:25.747000 audit[4275]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4264 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636323633363239343136393534353163323237313634356532313061 Dec 16 14:00:25.748000 audit: BPF prog-id=197 op=LOAD Dec 16 14:00:25.748000 audit[4275]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4264 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636323633363239343136393534353163323237313634356532313061 Dec 16 14:00:25.749000 audit: BPF prog-id=197 op=UNLOAD Dec 16 14:00:25.749000 audit[4275]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4264 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636323633363239343136393534353163323237313634356532313061 Dec 16 14:00:25.749000 audit: BPF prog-id=196 op=UNLOAD Dec 16 14:00:25.749000 audit[4275]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4264 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636323633363239343136393534353163323237313634356532313061 Dec 16 14:00:25.749000 audit: BPF prog-id=198 op=LOAD Dec 16 14:00:25.749000 audit[4275]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4264 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636323633363239343136393534353163323237313634356532313061 Dec 16 14:00:25.752431 containerd[1603]: time="2025-12-16T14:00:25.751913059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 14:00:25.844000 audit: BPF prog-id=199 op=LOAD Dec 16 14:00:25.844000 audit[4173]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=4 a0=5 a1=7ffdc6e1b280 a2=94 a3=1 items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.844000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.844000 audit: BPF prog-id=199 op=UNLOAD Dec 16 14:00:25.844000 audit[4173]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdc6e1b280 a2=94 a3=1 items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.844000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.856602 containerd[1603]: time="2025-12-16T14:00:25.856543630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d8d578c6b-htjdj,Uid:1769a332-0974-4355-84f4-605660a8e93f,Namespace:calico-system,Attempt:0,} returns sandbox id \"6626362941695451c2271645e210a1857de609fbf77cf88f84f569a88512dea8\"" Dec 16 14:00:25.867000 audit: BPF prog-id=200 op=LOAD Dec 16 14:00:25.867000 audit[4173]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdc6e1b270 a2=94 a3=4 items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.867000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.867000 audit: BPF prog-id=200 op=UNLOAD Dec 16 14:00:25.867000 audit[4173]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdc6e1b270 a2=0 a3=4 items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.867000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.868000 audit: BPF prog-id=201 op=LOAD Dec 16 14:00:25.868000 audit[4173]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdc6e1b0d0 a2=94 a3=5 items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.868000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.869000 audit: BPF prog-id=201 op=UNLOAD Dec 16 14:00:25.869000 audit[4173]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdc6e1b0d0 a2=0 a3=5 items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.869000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.869000 audit: BPF prog-id=202 op=LOAD Dec 16 14:00:25.869000 audit[4173]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdc6e1b2f0 a2=94 a3=6 items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.869000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.869000 audit: BPF prog-id=202 op=UNLOAD Dec 16 14:00:25.869000 audit[4173]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdc6e1b2f0 a2=0 a3=6 items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.869000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.869000 audit: BPF prog-id=203 op=LOAD Dec 16 14:00:25.869000 audit[4173]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdc6e1aaa0 a2=94 a3=88 items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.869000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.870000 audit: BPF prog-id=204 op=LOAD Dec 16 14:00:25.870000 audit[4173]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffdc6e1a920 a2=94 a3=2 items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.870000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.870000 audit: BPF prog-id=204 op=UNLOAD Dec 16 14:00:25.870000 audit[4173]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffdc6e1a950 a2=0 a3=7ffdc6e1aa50 items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.870000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.870000 audit: BPF prog-id=203 op=UNLOAD Dec 16 14:00:25.870000 audit[4173]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3b54dd10 a2=0 a3=aca9bfd452401fc6 items=0 ppid=4058 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.870000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 14:00:25.890000 audit: BPF prog-id=205 op=LOAD Dec 16 14:00:25.890000 audit[4309]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff069b3290 a2=98 a3=1999999999999999 items=0 ppid=4058 pid=4309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.890000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 14:00:25.890000 audit: BPF prog-id=205 op=UNLOAD Dec 16 14:00:25.890000 audit[4309]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff069b3260 a3=0 items=0 ppid=4058 pid=4309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.890000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 14:00:25.891000 audit: BPF prog-id=206 op=LOAD Dec 16 14:00:25.891000 audit[4309]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff069b3170 a2=94 a3=ffff items=0 ppid=4058 pid=4309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.891000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 14:00:25.891000 audit: BPF prog-id=206 op=UNLOAD Dec 16 14:00:25.891000 audit[4309]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff069b3170 a2=94 a3=ffff items=0 ppid=4058 pid=4309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.891000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 14:00:25.891000 audit: BPF prog-id=207 op=LOAD Dec 16 14:00:25.891000 audit[4309]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff069b31b0 a2=94 a3=7fff069b3390 items=0 ppid=4058 pid=4309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.891000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 14:00:25.891000 audit: BPF prog-id=207 op=UNLOAD Dec 16 14:00:25.891000 audit[4309]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff069b31b0 a2=94 a3=7fff069b3390 items=0 ppid=4058 pid=4309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:25.891000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 14:00:25.915026 containerd[1603]: time="2025-12-16T14:00:25.914905229Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:00:25.916988 containerd[1603]: time="2025-12-16T14:00:25.916823729Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 14:00:25.916988 containerd[1603]: time="2025-12-16T14:00:25.916947564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 14:00:25.917517 kubelet[2843]: E1216 14:00:25.917395 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:00:25.917517 kubelet[2843]: E1216 14:00:25.917482 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:00:25.919777 kubelet[2843]: E1216 14:00:25.918333 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66cd68bd7b-ds2n7_calico-apiserver(d41be582-68e3-4041-abac-e335f6c6ba13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 14:00:25.920652 containerd[1603]: 
time="2025-12-16T14:00:25.920343840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 14:00:25.921255 kubelet[2843]: E1216 14:00:25.921088 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-ds2n7" podUID="d41be582-68e3-4041-abac-e335f6c6ba13" Dec 16 14:00:25.961071 systemd-networkd[1502]: cali973889bab2b: Gained IPv6LL Dec 16 14:00:26.034946 containerd[1603]: time="2025-12-16T14:00:26.033899367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tnb5b,Uid:3882b1ea-b8cc-4201-890d-09890488c736,Namespace:kube-system,Attempt:0,}" Dec 16 14:00:26.035123 containerd[1603]: time="2025-12-16T14:00:26.034255987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7s56z,Uid:fff9659c-3470-45ea-9613-14efa791d03c,Namespace:calico-system,Attempt:0,}" Dec 16 14:00:26.046644 systemd-networkd[1502]: vxlan.calico: Link UP Dec 16 14:00:26.046656 systemd-networkd[1502]: vxlan.calico: Gained carrier Dec 16 14:00:26.089312 containerd[1603]: time="2025-12-16T14:00:26.089244618Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:00:26.090968 containerd[1603]: time="2025-12-16T14:00:26.090838294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 14:00:26.091348 containerd[1603]: time="2025-12-16T14:00:26.091168949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 14:00:26.091761 kubelet[2843]: E1216 14:00:26.091686 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 14:00:26.092040 kubelet[2843]: E1216 14:00:26.091899 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 14:00:26.092823 kubelet[2843]: E1216 14:00:26.092709 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2cq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d8d578c6b-htjdj_calico-system(1769a332-0974-4355-84f4-605660a8e93f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 14:00:26.094186 kubelet[2843]: E1216 14:00:26.094108 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d8d578c6b-htjdj" podUID="1769a332-0974-4355-84f4-605660a8e93f" Dec 16 14:00:26.097000 audit: BPF prog-id=208 op=LOAD Dec 16 14:00:26.097000 audit[4352]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc847b29c0 a2=98 a3=0 items=0 ppid=4058 pid=4352 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.097000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:00:26.099000 audit: BPF prog-id=208 op=UNLOAD Dec 16 14:00:26.099000 audit[4352]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc847b2990 a3=0 items=0 ppid=4058 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.099000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:00:26.099000 audit: BPF prog-id=209 op=LOAD Dec 16 14:00:26.099000 audit[4352]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc847b27d0 a2=94 a3=54428f items=0 ppid=4058 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.099000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:00:26.099000 audit: BPF prog-id=209 op=UNLOAD Dec 16 14:00:26.099000 audit[4352]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc847b27d0 a2=94 a3=54428f items=0 ppid=4058 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.099000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:00:26.100000 audit: BPF prog-id=210 op=LOAD Dec 16 14:00:26.100000 audit[4352]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc847b2800 a2=94 a3=2 items=0 ppid=4058 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.100000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:00:26.100000 audit: BPF prog-id=210 op=UNLOAD Dec 16 14:00:26.100000 audit[4352]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc847b2800 a2=0 a3=2 items=0 ppid=4058 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.100000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:00:26.100000 audit: BPF prog-id=211 op=LOAD Dec 16 14:00:26.100000 audit[4352]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc847b25b0 a2=94 a3=4 items=0 ppid=4058 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.100000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:00:26.101000 audit: BPF prog-id=211 op=UNLOAD Dec 16 14:00:26.101000 audit[4352]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc847b25b0 a2=94 a3=4 items=0 ppid=4058 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.101000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:00:26.101000 audit: BPF prog-id=212 op=LOAD Dec 16 14:00:26.101000 audit[4352]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc847b26b0 a2=94 a3=7ffc847b2830 items=0 ppid=4058 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.101000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:00:26.101000 audit: BPF prog-id=212 op=UNLOAD Dec 16 14:00:26.101000 audit[4352]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc847b26b0 a2=0 a3=7ffc847b2830 items=0 ppid=4058 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.101000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:00:26.104000 audit: BPF prog-id=213 op=LOAD Dec 16 14:00:26.104000 audit[4352]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc847b1de0 a2=94 a3=2 items=0 ppid=4058 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.104000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:00:26.105000 audit: BPF prog-id=213 op=UNLOAD Dec 16 14:00:26.105000 audit[4352]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc847b1de0 a2=0 a3=2 items=0 ppid=4058 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.105000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:00:26.105000 audit: BPF prog-id=214 op=LOAD Dec 16 14:00:26.105000 audit[4352]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc847b1ee0 a2=94 a3=30 items=0 ppid=4058 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.105000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 14:00:26.122000 audit: BPF prog-id=215 op=LOAD Dec 16 14:00:26.122000 audit[4356]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc5c17b450 a2=98 a3=0 items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.122000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.126000 audit: BPF prog-id=215 op=UNLOAD Dec 16 14:00:26.126000 audit[4356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc5c17b420 a3=0 items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.126000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.126000 audit: BPF prog-id=216 op=LOAD Dec 16 14:00:26.126000 audit[4356]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc5c17b240 a2=94 a3=54428f items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.126000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.126000 audit: BPF prog-id=216 op=UNLOAD Dec 16 14:00:26.126000 audit[4356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc5c17b240 a2=94 a3=54428f items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.126000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.126000 audit: BPF prog-id=217 op=LOAD Dec 16 14:00:26.126000 audit[4356]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc5c17b270 a2=94 a3=2 items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.126000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.126000 audit: BPF prog-id=217 op=UNLOAD Dec 16 14:00:26.126000 audit[4356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc5c17b270 a2=0 a3=2 items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.126000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.323098 kubelet[2843]: E1216 14:00:26.322466 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d8d578c6b-htjdj" podUID="1769a332-0974-4355-84f4-605660a8e93f" Dec 16 14:00:26.329547 kubelet[2843]: E1216 14:00:26.329504 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-ds2n7" podUID="d41be582-68e3-4041-abac-e335f6c6ba13" Dec 16 14:00:26.378509 systemd-networkd[1502]: cali463dc795227: Link UP Dec 16 14:00:26.381671 systemd-networkd[1502]: cali463dc795227: Gained carrier Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.187 [INFO][4333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-csi--node--driver--7s56z-eth0 csi-node-driver- calico-system fff9659c-3470-45ea-9613-14efa791d03c 714 0 2025-12-16 13:59:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal csi-node-driver-7s56z eth0 csi-node-driver 
[] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali463dc795227 [] [] }} ContainerID="c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" Namespace="calico-system" Pod="csi-node-driver-7s56z" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-csi--node--driver--7s56z-" Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.187 [INFO][4333] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" Namespace="calico-system" Pod="csi-node-driver-7s56z" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-csi--node--driver--7s56z-eth0" Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.279 [INFO][4363] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" HandleID="k8s-pod-network.c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-csi--node--driver--7s56z-eth0" Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.281 [INFO][4363] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" HandleID="k8s-pod-network.c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-csi--node--driver--7s56z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d2380), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", "pod":"csi-node-driver-7s56z", "timestamp":"2025-12-16 14:00:26.279731388 +0000 UTC"}, Hostname:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.282 [INFO][4363] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.283 [INFO][4363] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.283 [INFO][4363] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal' Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.298 [INFO][4363] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.304 [INFO][4363] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.310 [INFO][4363] ipam/ipam.go 511: Trying affinity for 192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.315 [INFO][4363] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.323 [INFO][4363] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.324 [INFO][4363] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.331 [INFO][4363] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.347 [INFO][4363] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.23.0/26 handle="k8s-pod-network.c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.361 [INFO][4363] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.23.4/26] block=192.168.23.0/26 handle="k8s-pod-network.c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.361 [INFO][4363] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.4/26] handle="k8s-pod-network.c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.361 [INFO][4363] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
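The two pull failures above (ghcr.io/flatcar/calico/apiserver:v3.30.4 and ghcr.io/flatcar/calico/kube-controllers:v3.30.4, both answered by ghcr.io with 404 Not Found) are what drive the ErrImagePull and ImagePullBackOff entries for the calico-apiserver and calico-kube-controllers pods. One way to confirm from outside the kubelet that a tag is genuinely absent is a manifest HEAD request against the registry's OCI distribution endpoint. The sketch below is illustrative only: it assumes ghcr.io's usual anonymous pull-token flow, and tag_exists is a hypothetical helper, not anything that appears in this log.

    # Hedged sketch: check whether a tag exists on ghcr.io via the OCI
    # distribution API, assuming anonymous pull tokens are allowed.
    import json
    import urllib.error
    import urllib.request

    def tag_exists(repo: str, tag: str) -> bool:
        # Fetch an anonymous pull token for the repository.
        token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        # HEAD the manifest; a 404 means the tag does not exist.
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
            method="HEAD",
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    print(tag_exists("flatcar/calico/apiserver", "v3.30.4"))  # expected False per the log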
Dec 16 14:00:26.405929 containerd[1603]: 2025-12-16 14:00:26.361 [INFO][4363] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.23.4/26] IPv6=[] ContainerID="c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" HandleID="k8s-pod-network.c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-csi--node--driver--7s56z-eth0" Dec 16 14:00:26.411026 containerd[1603]: 2025-12-16 14:00:26.368 [INFO][4333] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" Namespace="calico-system" Pod="csi-node-driver-7s56z" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-csi--node--driver--7s56z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-csi--node--driver--7s56z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fff9659c-3470-45ea-9613-14efa791d03c", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 59, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-7s56z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.23.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali463dc795227", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:00:26.411026 containerd[1603]: 2025-12-16 14:00:26.369 [INFO][4333] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.4/32] ContainerID="c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" Namespace="calico-system" Pod="csi-node-driver-7s56z" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-csi--node--driver--7s56z-eth0" Dec 16 14:00:26.411026 containerd[1603]: 2025-12-16 14:00:26.370 [INFO][4333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali463dc795227 ContainerID="c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" Namespace="calico-system" Pod="csi-node-driver-7s56z" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-csi--node--driver--7s56z-eth0" Dec 16 14:00:26.411026 containerd[1603]: 2025-12-16 14:00:26.382 [INFO][4333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" Namespace="calico-system" Pod="csi-node-driver-7s56z" 
WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-csi--node--driver--7s56z-eth0" Dec 16 14:00:26.411026 containerd[1603]: 2025-12-16 14:00:26.384 [INFO][4333] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" Namespace="calico-system" Pod="csi-node-driver-7s56z" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-csi--node--driver--7s56z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-csi--node--driver--7s56z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fff9659c-3470-45ea-9613-14efa791d03c", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 59, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", ContainerID:"c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a", Pod:"csi-node-driver-7s56z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.23.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali463dc795227", MAC:"d2:97:9c:1c:0c:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:00:26.411026 containerd[1603]: 2025-12-16 14:00:26.402 [INFO][4333] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" Namespace="calico-system" Pod="csi-node-driver-7s56z" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-csi--node--driver--7s56z-eth0" Dec 16 14:00:26.439000 audit[4386]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:00:26.439000 audit[4386]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeeca8cf60 a2=0 a3=7ffeeca8cf4c items=0 ppid=3030 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.439000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:00:26.446000 audit[4386]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:00:26.446000 audit[4386]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 
a1=7ffeeca8cf60 a2=0 a3=0 items=0 ppid=3030 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.446000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:00:26.485569 containerd[1603]: time="2025-12-16T14:00:26.485447045Z" level=info msg="connecting to shim c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a" address="unix:///run/containerd/s/0cfd82990339e7bd05a188c84f32528972adda91a1b922854ba0ee3f4599c90f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:00:26.495457 systemd-networkd[1502]: caliba1f2f5ce15: Link UP Dec 16 14:00:26.497925 systemd-networkd[1502]: caliba1f2f5ce15: Gained carrier Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.215 [INFO][4330] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--tnb5b-eth0 coredns-668d6bf9bc- kube-system 3882b1ea-b8cc-4201-890d-09890488c736 831 0 2025-12-16 13:59:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal coredns-668d6bf9bc-tnb5b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliba1f2f5ce15 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" Namespace="kube-system" Pod="coredns-668d6bf9bc-tnb5b" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--tnb5b-" Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.216 [INFO][4330] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" Namespace="kube-system" Pod="coredns-668d6bf9bc-tnb5b" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--tnb5b-eth0" Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.293 [INFO][4369] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" HandleID="k8s-pod-network.01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--tnb5b-eth0" Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.293 [INFO][4369] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" HandleID="k8s-pod-network.01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--tnb5b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf910), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", "pod":"coredns-668d6bf9bc-tnb5b", "timestamp":"2025-12-16 14:00:26.293053375 +0000 UTC"}, Hostname:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.293 [INFO][4369] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.362 [INFO][4369] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.362 [INFO][4369] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal' Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.401 [INFO][4369] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.411 [INFO][4369] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.426 [INFO][4369] ipam/ipam.go 511: Trying affinity for 192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.428 [INFO][4369] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.431 [INFO][4369] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.432 [INFO][4369] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.434 [INFO][4369] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027 Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.445 [INFO][4369] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.23.0/26 handle="k8s-pod-network.01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.473 [INFO][4369] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.23.5/26] block=192.168.23.0/26 handle="k8s-pod-network.01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.473 [INFO][4369] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.5/26] handle="k8s-pod-network.01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.474 [INFO][4369] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
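Both sandboxes draw from the same host-affine IPAM block: csi-node-driver-7s56z was assigned 192.168.23.4/26 above, and coredns-668d6bf9bc-tnb5b claims 192.168.23.5/26 here, the following address in the 64-address block 192.168.23.0/26 that this node holds an affinity for. The arithmetic can be sanity-checked with the standard library (illustrative only, not Calico's allocator):

    # Illustrative check of the IPAM block math seen in the log.
    import ipaddress

    block = ipaddress.ip_network("192.168.23.0/26")
    print(block.num_addresses)                            # 64 addresses per /26 block
    print(ipaddress.ip_address("192.168.23.4") in block)  # True  (csi-node-driver-7s56z)
    print(ipaddress.ip_address("192.168.23.5") in block)  # True  (coredns-668d6bf9bc-tnb5b)
    print(block.broadcast_address)                        # 192.168.23.63, last address of the block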
Dec 16 14:00:26.532651 containerd[1603]: 2025-12-16 14:00:26.474 [INFO][4369] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.23.5/26] IPv6=[] ContainerID="01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" HandleID="k8s-pod-network.01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--tnb5b-eth0" Dec 16 14:00:26.535467 containerd[1603]: 2025-12-16 14:00:26.483 [INFO][4330] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" Namespace="kube-system" Pod="coredns-668d6bf9bc-tnb5b" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--tnb5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--tnb5b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3882b1ea-b8cc-4201-890d-09890488c736", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 59, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-668d6bf9bc-tnb5b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba1f2f5ce15", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:00:26.535467 containerd[1603]: 2025-12-16 14:00:26.484 [INFO][4330] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.5/32] ContainerID="01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" Namespace="kube-system" Pod="coredns-668d6bf9bc-tnb5b" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--tnb5b-eth0" Dec 16 14:00:26.535467 containerd[1603]: 2025-12-16 14:00:26.486 [INFO][4330] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba1f2f5ce15 ContainerID="01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" Namespace="kube-system" Pod="coredns-668d6bf9bc-tnb5b" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--tnb5b-eth0" Dec 16 14:00:26.535467 containerd[1603]: 2025-12-16 
14:00:26.492 [INFO][4330] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" Namespace="kube-system" Pod="coredns-668d6bf9bc-tnb5b" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--tnb5b-eth0" Dec 16 14:00:26.535467 containerd[1603]: 2025-12-16 14:00:26.492 [INFO][4330] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" Namespace="kube-system" Pod="coredns-668d6bf9bc-tnb5b" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--tnb5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--tnb5b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3882b1ea-b8cc-4201-890d-09890488c736", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 59, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", ContainerID:"01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027", Pod:"coredns-668d6bf9bc-tnb5b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba1f2f5ce15", MAC:"7e:1b:c5:44:19:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:00:26.535467 containerd[1603]: 2025-12-16 14:00:26.524 [INFO][4330] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" Namespace="kube-system" Pod="coredns-668d6bf9bc-tnb5b" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--tnb5b-eth0" Dec 16 14:00:26.588986 systemd[1]: Started cri-containerd-c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a.scope - libcontainer container c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a. 
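The coredns WorkloadEndpoint written above lists its container ports in hex (Port:0x35 twice and Port:0x23c1), which are simply the usual CoreDNS ports in decimal:

    # The Go struct dump prints WorkloadEndpointPort values in hex;
    # converting them back gives the familiar CoreDNS ports.
    print(int("0x35", 16))    # 53   -> dns (UDP) and dns-tcp (TCP)
    print(int("0x23c1", 16))  # 9153 -> metrics (TCP)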
Dec 16 14:00:26.604918 containerd[1603]: time="2025-12-16T14:00:26.604834773Z" level=info msg="connecting to shim 01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027" address="unix:///run/containerd/s/528fbe188d6b78fc4b2dd4439c10472eaf40464591f398ac367da6e0fd943976" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:00:26.644000 audit: BPF prog-id=218 op=LOAD Dec 16 14:00:26.645000 audit: BPF prog-id=219 op=LOAD Dec 16 14:00:26.645000 audit[4410]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=4398 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663064353239313562383063366265356364353863356330373838 Dec 16 14:00:26.645000 audit: BPF prog-id=219 op=UNLOAD Dec 16 14:00:26.645000 audit[4410]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4398 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663064353239313562383063366265356364353863356330373838 Dec 16 14:00:26.646000 audit: BPF prog-id=220 op=LOAD Dec 16 14:00:26.646000 audit[4410]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=4398 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663064353239313562383063366265356364353863356330373838 Dec 16 14:00:26.646000 audit: BPF prog-id=221 op=LOAD Dec 16 14:00:26.646000 audit[4410]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=4398 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663064353239313562383063366265356364353863356330373838 Dec 16 14:00:26.646000 audit: BPF prog-id=221 op=UNLOAD Dec 16 14:00:26.646000 audit[4410]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4398 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.646000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663064353239313562383063366265356364353863356330373838 Dec 16 14:00:26.646000 audit: BPF prog-id=220 op=UNLOAD Dec 16 14:00:26.646000 audit[4410]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4398 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663064353239313562383063366265356364353863356330373838 Dec 16 14:00:26.647000 audit: BPF prog-id=222 op=LOAD Dec 16 14:00:26.647000 audit[4410]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=4398 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6331663064353239313562383063366265356364353863356330373838 Dec 16 14:00:26.675219 systemd[1]: Started cri-containerd-01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027.scope - libcontainer container 01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027. 
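The proctitle= fields in the audit records throughout this section are the audited process's argv, hex-encoded with NUL separators. Decoding them recovers the exact command lines: the bpftool map/prog operations against /sys/fs/bpf/calico, the iptables-restore -w 5 -W 100000 --noflush --counters runs, and, in the surrounding records, the runc invocations that start the two sandbox containers. A small decoder (illustrative; decode_proctitle is not part of any tool shown in this log):

    # Audit PROCTITLE records carry the process argv hex-encoded, with NUL
    # bytes separating the arguments; this turns one back into a command line.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(a.decode("utf-8", "replace") for a in raw.split(b"\x00") if a)

    # Value taken from one of the bpftool records earlier in this section:
    print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
    # -> bpftool map list --json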
Dec 16 14:00:26.708000 audit: BPF prog-id=223 op=LOAD Dec 16 14:00:26.709000 audit: BPF prog-id=224 op=LOAD Dec 16 14:00:26.709000 audit[4454]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4436 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031623363326464646131313233356162626430333063653463323961 Dec 16 14:00:26.709000 audit: BPF prog-id=224 op=UNLOAD Dec 16 14:00:26.709000 audit[4454]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4436 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031623363326464646131313233356162626430333063653463323961 Dec 16 14:00:26.711000 audit: BPF prog-id=225 op=LOAD Dec 16 14:00:26.711000 audit[4454]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4436 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031623363326464646131313233356162626430333063653463323961 Dec 16 14:00:26.711000 audit: BPF prog-id=226 op=LOAD Dec 16 14:00:26.711000 audit[4454]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4436 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031623363326464646131313233356162626430333063653463323961 Dec 16 14:00:26.712846 containerd[1603]: time="2025-12-16T14:00:26.712684582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7s56z,Uid:fff9659c-3470-45ea-9613-14efa791d03c,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1f0d52915b80c6be5cd58c5c0788953274e263f67482e188f109ed1d478ea7a\"" Dec 16 14:00:26.712000 audit: BPF prog-id=226 op=UNLOAD Dec 16 14:00:26.712000 audit[4454]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4436 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.712000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031623363326464646131313233356162626430333063653463323961 Dec 16 14:00:26.712000 audit: BPF prog-id=225 op=UNLOAD Dec 16 14:00:26.712000 audit[4454]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4436 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031623363326464646131313233356162626430333063653463323961 Dec 16 14:00:26.712000 audit: BPF prog-id=227 op=LOAD Dec 16 14:00:26.712000 audit[4454]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4436 pid=4454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031623363326464646131313233356162626430333063653463323961 Dec 16 14:00:26.716635 containerd[1603]: time="2025-12-16T14:00:26.716489940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 14:00:26.784949 containerd[1603]: time="2025-12-16T14:00:26.784716461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tnb5b,Uid:3882b1ea-b8cc-4201-890d-09890488c736,Namespace:kube-system,Attempt:0,} returns sandbox id \"01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027\"" Dec 16 14:00:26.793283 containerd[1603]: time="2025-12-16T14:00:26.793126889Z" level=info msg="CreateContainer within sandbox \"01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 14:00:26.805770 containerd[1603]: time="2025-12-16T14:00:26.805532946Z" level=info msg="Container f81b5c958b9c342410a29128a7614c1079d35f8eed62a2b4229fe2ed28e2522c: CDI devices from CRI Config.CDIDevices: []" Dec 16 14:00:26.815000 audit: BPF prog-id=228 op=LOAD Dec 16 14:00:26.815000 audit[4356]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc5c17b130 a2=94 a3=1 items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.815000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.815000 audit: BPF prog-id=228 op=UNLOAD Dec 16 14:00:26.815000 audit[4356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc5c17b130 a2=94 a3=1 items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 14:00:26.815000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.842399 containerd[1603]: time="2025-12-16T14:00:26.842318916Z" level=info msg="CreateContainer within sandbox \"01b3c2ddda11235abbd030ce4c29a40c656975daaba706e6b0483a975a6e9027\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f81b5c958b9c342410a29128a7614c1079d35f8eed62a2b4229fe2ed28e2522c\"" Dec 16 14:00:26.844968 containerd[1603]: time="2025-12-16T14:00:26.844912304Z" level=info msg="StartContainer for \"f81b5c958b9c342410a29128a7614c1079d35f8eed62a2b4229fe2ed28e2522c\"" Dec 16 14:00:26.846295 containerd[1603]: time="2025-12-16T14:00:26.846243811Z" level=info msg="connecting to shim f81b5c958b9c342410a29128a7614c1079d35f8eed62a2b4229fe2ed28e2522c" address="unix:///run/containerd/s/528fbe188d6b78fc4b2dd4439c10472eaf40464591f398ac367da6e0fd943976" protocol=ttrpc version=3 Dec 16 14:00:26.864000 audit: BPF prog-id=229 op=LOAD Dec 16 14:00:26.864000 audit[4356]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc5c17b120 a2=94 a3=4 items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.864000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.864000 audit: BPF prog-id=229 op=UNLOAD Dec 16 14:00:26.864000 audit[4356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc5c17b120 a2=0 a3=4 items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.864000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.876000 audit: BPF prog-id=230 op=LOAD Dec 16 14:00:26.876000 audit[4356]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc5c17af80 a2=94 a3=5 items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.876000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.877000 audit: BPF prog-id=230 op=UNLOAD Dec 16 14:00:26.877000 audit[4356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc5c17af80 a2=0 a3=5 items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.877000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.877000 audit: BPF 
prog-id=231 op=LOAD Dec 16 14:00:26.877000 audit[4356]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc5c17b1a0 a2=94 a3=6 items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.877000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.878000 audit: BPF prog-id=231 op=UNLOAD Dec 16 14:00:26.878000 audit[4356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc5c17b1a0 a2=0 a3=6 items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.878000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.881050 systemd[1]: Started cri-containerd-f81b5c958b9c342410a29128a7614c1079d35f8eed62a2b4229fe2ed28e2522c.scope - libcontainer container f81b5c958b9c342410a29128a7614c1079d35f8eed62a2b4229fe2ed28e2522c. Dec 16 14:00:26.878000 audit: BPF prog-id=232 op=LOAD Dec 16 14:00:26.878000 audit[4356]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc5c17a950 a2=94 a3=88 items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.878000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.882000 audit: BPF prog-id=233 op=LOAD Dec 16 14:00:26.882000 audit[4356]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc5c17a7d0 a2=94 a3=2 items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.882000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.882000 audit: BPF prog-id=233 op=UNLOAD Dec 16 14:00:26.882000 audit[4356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc5c17a800 a2=0 a3=7ffc5c17a900 items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.882000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.882000 audit: BPF prog-id=232 op=UNLOAD Dec 16 14:00:26.882000 audit[4356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=12ea7d10 a2=0 a3=d64d9dd735d4cdf2 items=0 ppid=4058 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.882000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 14:00:26.897000 audit: BPF prog-id=214 op=UNLOAD Dec 16 14:00:26.897000 audit[4058]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00137f840 a2=0 a3=0 items=0 ppid=4040 pid=4058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.897000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 14:00:26.901419 containerd[1603]: time="2025-12-16T14:00:26.901370265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:00:26.902913 containerd[1603]: time="2025-12-16T14:00:26.902868846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 14:00:26.903069 containerd[1603]: time="2025-12-16T14:00:26.902910234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 14:00:26.903905 kubelet[2843]: E1216 14:00:26.903438 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 14:00:26.903905 kubelet[2843]: E1216 14:00:26.903596 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 14:00:26.905207 kubelet[2843]: E1216 14:00:26.905058 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd72z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7s56z_calico-system(fff9659c-3470-45ea-9613-14efa791d03c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 14:00:26.908374 containerd[1603]: time="2025-12-16T14:00:26.908297585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 14:00:26.918000 audit: BPF prog-id=234 op=LOAD Dec 16 14:00:26.919000 audit: BPF prog-id=235 op=LOAD Dec 16 14:00:26.919000 audit[4487]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4436 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.919000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638316235633935386239633334323431306132393132386137363134 Dec 16 14:00:26.920000 audit: BPF prog-id=235 op=UNLOAD Dec 16 14:00:26.920000 audit[4487]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4436 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.920000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638316235633935386239633334323431306132393132386137363134 Dec 16 14:00:26.920000 audit: BPF prog-id=236 op=LOAD Dec 16 14:00:26.920000 audit[4487]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4436 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638316235633935386239633334323431306132393132386137363134 Dec 16 14:00:26.920000 audit: BPF prog-id=237 op=LOAD Dec 16 14:00:26.920000 audit[4487]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4436 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638316235633935386239633334323431306132393132386137363134 Dec 16 14:00:26.921000 audit: BPF prog-id=237 op=UNLOAD Dec 16 14:00:26.921000 audit[4487]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4436 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638316235633935386239633334323431306132393132386137363134 Dec 16 14:00:26.921000 audit: BPF prog-id=236 op=UNLOAD Dec 16 14:00:26.921000 audit[4487]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4436 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638316235633935386239633334323431306132393132386137363134 Dec 16 14:00:26.921000 audit: BPF prog-id=238 op=LOAD Dec 16 14:00:26.921000 audit[4487]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4436 pid=4487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:26.921000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638316235633935386239633334323431306132393132386137363134 Dec 16 14:00:26.985100 systemd-networkd[1502]: calid52b3c9bcd4: Gained IPv6LL Dec 16 14:00:26.987150 containerd[1603]: time="2025-12-16T14:00:26.985809121Z" level=info msg="StartContainer for \"f81b5c958b9c342410a29128a7614c1079d35f8eed62a2b4229fe2ed28e2522c\" returns successfully" Dec 16 14:00:27.031110 containerd[1603]: time="2025-12-16T14:00:27.031060485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fdrzj,Uid:61b00035-2995-44f7-ae62-3ec89692e439,Namespace:calico-system,Attempt:0,}" Dec 16 14:00:27.126368 containerd[1603]: time="2025-12-16T14:00:27.126280815Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:00:27.131883 containerd[1603]: time="2025-12-16T14:00:27.130988561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 14:00:27.133920 containerd[1603]: time="2025-12-16T14:00:27.132108985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 14:00:27.134636 kubelet[2843]: E1216 14:00:27.134579 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 14:00:27.135206 kubelet[2843]: E1216 14:00:27.134650 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 14:00:27.137209 kubelet[2843]: E1216 14:00:27.137118 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd72z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7s56z_calico-system(fff9659c-3470-45ea-9613-14efa791d03c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 14:00:27.139932 kubelet[2843]: E1216 14:00:27.139861 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c" Dec 16 14:00:27.241132 systemd-networkd[1502]: cali176548b1225: Gained IPv6LL Dec 16 14:00:27.273000 audit[4565]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=4565 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 14:00:27.273000 audit[4565]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffebd28b5c0 a2=0 a3=7ffebd28b5ac items=0 ppid=4058 pid=4565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:27.274198 systemd-networkd[1502]: califd22505c796: Link UP Dec 16 14:00:27.273000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 14:00:27.277840 systemd-networkd[1502]: califd22505c796: Gained carrier Dec 16 14:00:27.285000 audit[4568]: NETFILTER_CFG table=nat:126 family=2 entries=15 op=nft_register_chain pid=4568 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 14:00:27.285000 audit[4568]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc6398bd80 a2=0 a3=7ffc6398bd6c items=0 ppid=4058 pid=4568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:27.285000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 14:00:27.287000 audit[4564]: NETFILTER_CFG table=raw:127 family=2 entries=21 op=nft_register_chain pid=4564 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 14:00:27.287000 audit[4564]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff2ff6fea0 a2=0 a3=7fff2ff6fe8c items=0 ppid=4058 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:27.287000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.117 [INFO][4531] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-goldmane--666569f655--fdrzj-eth0 goldmane-666569f655- calico-system 61b00035-2995-44f7-ae62-3ec89692e439 832 0 2025-12-16 13:59:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal goldmane-666569f655-fdrzj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califd22505c796 [] [] }} ContainerID="b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" Namespace="calico-system" Pod="goldmane-666569f655-fdrzj" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-goldmane--666569f655--fdrzj-" Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.118 [INFO][4531] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" Namespace="calico-system" Pod="goldmane-666569f655-fdrzj" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-goldmane--666569f655--fdrzj-eth0" Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.198 [INFO][4550] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" HandleID="k8s-pod-network.b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" 
Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-goldmane--666569f655--fdrzj-eth0" Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.199 [INFO][4550] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" HandleID="k8s-pod-network.b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-goldmane--666569f655--fdrzj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", "pod":"goldmane-666569f655-fdrzj", "timestamp":"2025-12-16 14:00:27.198004924 +0000 UTC"}, Hostname:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.199 [INFO][4550] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.199 [INFO][4550] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.199 [INFO][4550] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal' Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.210 [INFO][4550] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.217 [INFO][4550] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.223 [INFO][4550] ipam/ipam.go 511: Trying affinity for 192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.226 [INFO][4550] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.231 [INFO][4550] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.234 [INFO][4550] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.238 [INFO][4550] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01 Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.245 [INFO][4550] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.23.0/26 handle="k8s-pod-network.b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:27.319579 containerd[1603]: 
2025-12-16 14:00:27.256 [INFO][4550] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.23.6/26] block=192.168.23.0/26 handle="k8s-pod-network.b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.256 [INFO][4550] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.6/26] handle="k8s-pod-network.b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.257 [INFO][4550] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 14:00:27.319579 containerd[1603]: 2025-12-16 14:00:27.257 [INFO][4550] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.23.6/26] IPv6=[] ContainerID="b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" HandleID="k8s-pod-network.b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-goldmane--666569f655--fdrzj-eth0" Dec 16 14:00:27.325347 containerd[1603]: 2025-12-16 14:00:27.263 [INFO][4531] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" Namespace="calico-system" Pod="goldmane-666569f655-fdrzj" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-goldmane--666569f655--fdrzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-goldmane--666569f655--fdrzj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"61b00035-2995-44f7-ae62-3ec89692e439", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 59, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-666569f655-fdrzj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.23.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califd22505c796", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:00:27.325347 containerd[1603]: 2025-12-16 14:00:27.264 [INFO][4531] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.6/32] ContainerID="b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" Namespace="calico-system" Pod="goldmane-666569f655-fdrzj" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-goldmane--666569f655--fdrzj-eth0" Dec 16 14:00:27.325347 containerd[1603]: 2025-12-16 14:00:27.264 [INFO][4531] cni-plugin/dataplane_linux.go 69: 
Setting the host side veth name to califd22505c796 ContainerID="b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" Namespace="calico-system" Pod="goldmane-666569f655-fdrzj" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-goldmane--666569f655--fdrzj-eth0" Dec 16 14:00:27.325347 containerd[1603]: 2025-12-16 14:00:27.282 [INFO][4531] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" Namespace="calico-system" Pod="goldmane-666569f655-fdrzj" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-goldmane--666569f655--fdrzj-eth0" Dec 16 14:00:27.325347 containerd[1603]: 2025-12-16 14:00:27.286 [INFO][4531] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" Namespace="calico-system" Pod="goldmane-666569f655-fdrzj" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-goldmane--666569f655--fdrzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-goldmane--666569f655--fdrzj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"61b00035-2995-44f7-ae62-3ec89692e439", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 59, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", ContainerID:"b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01", Pod:"goldmane-666569f655-fdrzj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.23.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califd22505c796", MAC:"a2:dd:17:e7:88:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:00:27.325347 containerd[1603]: 2025-12-16 14:00:27.314 [INFO][4531] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" Namespace="calico-system" Pod="goldmane-666569f655-fdrzj" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-goldmane--666569f655--fdrzj-eth0" Dec 16 14:00:27.305000 audit[4567]: NETFILTER_CFG table=filter:128 family=2 entries=164 op=nft_register_chain pid=4567 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 14:00:27.305000 audit[4567]: SYSCALL arch=c000003e syscall=46 success=yes exit=95100 a0=3 a1=7ffed7951ac0 a2=0 a3=7ffed7951aac items=0 ppid=4058 pid=4567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:27.305000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 14:00:27.345884 kubelet[2843]: E1216 14:00:27.345724 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-ds2n7" podUID="d41be582-68e3-4041-abac-e335f6c6ba13" Dec 16 14:00:27.346599 kubelet[2843]: E1216 14:00:27.346383 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d8d578c6b-htjdj" podUID="1769a332-0974-4355-84f4-605660a8e93f" Dec 16 14:00:27.349089 kubelet[2843]: E1216 14:00:27.348815 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c" Dec 16 14:00:27.366292 containerd[1603]: time="2025-12-16T14:00:27.366005127Z" level=info msg="connecting to shim b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01" address="unix:///run/containerd/s/7ff9d3d69ab9cb254fdbfa7ad84145c556d57b389a3eef934fe7d978493798c5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:00:27.420161 systemd[1]: Started cri-containerd-b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01.scope - libcontainer container b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01. 
Dec 16 14:00:27.442409 kubelet[2843]: I1216 14:00:27.441094 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tnb5b" podStartSLOduration=50.441069164 podStartE2EDuration="50.441069164s" podCreationTimestamp="2025-12-16 13:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 14:00:27.387958707 +0000 UTC m=+57.555301995" watchObservedRunningTime="2025-12-16 14:00:27.441069164 +0000 UTC m=+57.608412452" Dec 16 14:00:27.487000 audit[4623]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:00:27.487000 audit[4623]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff44303910 a2=0 a3=7fff443038fc items=0 ppid=3030 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:27.487000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:00:27.491000 audit[4623]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:00:27.491000 audit[4623]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff44303910 a2=0 a3=0 items=0 ppid=3030 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:27.491000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:00:27.514000 audit: BPF prog-id=239 op=LOAD Dec 16 14:00:27.516000 audit: BPF prog-id=240 op=LOAD Dec 16 14:00:27.516000 audit[4602]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4591 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:27.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343832663730353432613266663432353631363230663266396237 Dec 16 14:00:27.516000 audit: BPF prog-id=240 op=UNLOAD Dec 16 14:00:27.516000 audit[4602]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4591 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:27.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343832663730353432613266663432353631363230663266396237 Dec 16 14:00:27.517000 audit: BPF prog-id=241 op=LOAD Dec 16 14:00:27.517000 audit[4602]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4591 pid=4602 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:27.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343832663730353432613266663432353631363230663266396237 Dec 16 14:00:27.517000 audit: BPF prog-id=242 op=LOAD Dec 16 14:00:27.517000 audit[4602]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4591 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:27.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343832663730353432613266663432353631363230663266396237 Dec 16 14:00:27.517000 audit: BPF prog-id=242 op=UNLOAD Dec 16 14:00:27.517000 audit[4602]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4591 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:27.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343832663730353432613266663432353631363230663266396237 Dec 16 14:00:27.517000 audit: BPF prog-id=241 op=UNLOAD Dec 16 14:00:27.517000 audit[4602]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4591 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:27.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343832663730353432613266663432353631363230663266396237 Dec 16 14:00:27.518000 audit: BPF prog-id=243 op=LOAD Dec 16 14:00:27.518000 audit[4602]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4591 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:27.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233343832663730353432613266663432353631363230663266396237 Dec 16 14:00:27.543000 audit[4628]: NETFILTER_CFG table=filter:131 family=2 entries=114 op=nft_register_chain pid=4628 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 14:00:27.543000 audit[4628]: SYSCALL arch=c000003e syscall=46 success=yes exit=63884 a0=3 a1=7ffdf1ae3530 a2=0 a3=7ffdf1ae351c items=0 ppid=4058 pid=4628 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:27.543000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 14:00:27.577552 containerd[1603]: time="2025-12-16T14:00:27.577492386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fdrzj,Uid:61b00035-2995-44f7-ae62-3ec89692e439,Namespace:calico-system,Attempt:0,} returns sandbox id \"b3482f70542a2ff42561620f2f9b725631585226a0b140bd25c85fbfbeeb4c01\"" Dec 16 14:00:27.580208 containerd[1603]: time="2025-12-16T14:00:27.580173561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 14:00:27.625282 systemd-networkd[1502]: caliba1f2f5ce15: Gained IPv6LL Dec 16 14:00:27.758830 containerd[1603]: time="2025-12-16T14:00:27.758249184Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:00:27.759762 containerd[1603]: time="2025-12-16T14:00:27.759688187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 14:00:27.759953 containerd[1603]: time="2025-12-16T14:00:27.759729712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 14:00:27.760134 kubelet[2843]: E1216 14:00:27.760041 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 14:00:27.760265 kubelet[2843]: E1216 14:00:27.760122 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 14:00:27.760767 kubelet[2843]: E1216 14:00:27.760664 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbzd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fdrzj_calico-system(61b00035-2995-44f7-ae62-3ec89692e439): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 14:00:27.762020 kubelet[2843]: E1216 14:00:27.761937 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fdrzj" podUID="61b00035-2995-44f7-ae62-3ec89692e439" Dec 16 14:00:27.881079 systemd-networkd[1502]: cali463dc795227: Gained IPv6LL Dec 16 14:00:28.009040 systemd-networkd[1502]: vxlan.calico: Gained IPv6LL 
Dec 16 14:00:28.033270 containerd[1603]: time="2025-12-16T14:00:28.033217316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66cd68bd7b-v9gp2,Uid:3ee33909-6767-4f65-befa-f64702fcbe38,Namespace:calico-apiserver,Attempt:0,}" Dec 16 14:00:28.034322 containerd[1603]: time="2025-12-16T14:00:28.034029734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k7tpf,Uid:4e270e1c-6e68-4a70-8c3e-e3b7e23d7ba4,Namespace:kube-system,Attempt:0,}" Dec 16 14:00:28.228646 systemd-networkd[1502]: cali82f9c19e921: Link UP Dec 16 14:00:28.231288 systemd-networkd[1502]: cali82f9c19e921: Gained carrier Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.119 [INFO][4635] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--v9gp2-eth0 calico-apiserver-66cd68bd7b- calico-apiserver 3ee33909-6767-4f65-befa-f64702fcbe38 835 0 2025-12-16 13:59:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66cd68bd7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal calico-apiserver-66cd68bd7b-v9gp2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali82f9c19e921 [] [] }} ContainerID="cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" Namespace="calico-apiserver" Pod="calico-apiserver-66cd68bd7b-v9gp2" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--v9gp2-" Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.119 [INFO][4635] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" Namespace="calico-apiserver" Pod="calico-apiserver-66cd68bd7b-v9gp2" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--v9gp2-eth0" Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.169 [INFO][4660] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" HandleID="k8s-pod-network.cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--v9gp2-eth0" Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.169 [INFO][4660] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" HandleID="k8s-pod-network.cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--v9gp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", "pod":"calico-apiserver-66cd68bd7b-v9gp2", "timestamp":"2025-12-16 14:00:28.169033312 +0000 UTC"}, Hostname:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.169 [INFO][4660] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.169 [INFO][4660] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.169 [INFO][4660] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal' Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.180 [INFO][4660] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.186 [INFO][4660] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.193 [INFO][4660] ipam/ipam.go 511: Trying affinity for 192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.195 [INFO][4660] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.199 [INFO][4660] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.199 [INFO][4660] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.201 [INFO][4660] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642 Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.207 [INFO][4660] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.23.0/26 handle="k8s-pod-network.cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.215 [INFO][4660] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.23.7/26] block=192.168.23.0/26 handle="k8s-pod-network.cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.215 [INFO][4660] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.7/26] handle="k8s-pod-network.cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.216 [INFO][4660] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
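The IPAM trace above (acquire the host-wide lock, confirm the block 192.168.23.0/26 affine to this node, claim the next address, write the block back to the datastore) can be followed with a toy model. The sketch below is a deliberately simplified illustration of block-affinity assignment and is not Calico's implementation; everything beyond what the log itself shows is an assumption:

```python
# Toy model of the block-affinity IPAM steps logged above; NOT Calico's code.
import ipaddress
import threading

class AffineBlock:
    """One block affine to this host, e.g. 192.168.23.0/26 in the log above."""
    def __init__(self, cidr: str):
        self.network = ipaddress.ip_network(cidr)
        self.claims: dict[str, ipaddress.IPv4Address] = {}   # handle -> claimed address
        self.lock = threading.Lock()                          # stand-in for the host-wide IPAM lock

    def auto_assign(self, handle: str) -> ipaddress.IPv4Address:
        with self.lock:                                       # "Acquired host-wide IPAM lock"
            in_use = set(self.claims.values())
            for addr in self.network.hosts():                 # "Attempting to assign 1 addresses from block"
                if addr not in in_use:
                    self.claims[handle] = addr                # "Writing block in order to claim IPs"
                    return addr                               # "Successfully claimed IPs"
            raise RuntimeError("block exhausted; a real IPAM would claim another block")

block = AffineBlock("192.168.23.0/26")
# The toy model starts at .1 on an empty block; the node above was already at .7.
print(block.auto_assign("k8s-pod-network.cf8cf822..."))
```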
Dec 16 14:00:28.251539 containerd[1603]: 2025-12-16 14:00:28.216 [INFO][4660] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.23.7/26] IPv6=[] ContainerID="cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" HandleID="k8s-pod-network.cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--v9gp2-eth0" Dec 16 14:00:28.254126 containerd[1603]: 2025-12-16 14:00:28.221 [INFO][4635] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" Namespace="calico-apiserver" Pod="calico-apiserver-66cd68bd7b-v9gp2" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--v9gp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--v9gp2-eth0", GenerateName:"calico-apiserver-66cd68bd7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ee33909-6767-4f65-befa-f64702fcbe38", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66cd68bd7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-66cd68bd7b-v9gp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82f9c19e921", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:00:28.254126 containerd[1603]: 2025-12-16 14:00:28.222 [INFO][4635] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.7/32] ContainerID="cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" Namespace="calico-apiserver" Pod="calico-apiserver-66cd68bd7b-v9gp2" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--v9gp2-eth0" Dec 16 14:00:28.254126 containerd[1603]: 2025-12-16 14:00:28.222 [INFO][4635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82f9c19e921 ContainerID="cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" Namespace="calico-apiserver" Pod="calico-apiserver-66cd68bd7b-v9gp2" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--v9gp2-eth0" Dec 16 14:00:28.254126 containerd[1603]: 2025-12-16 14:00:28.234 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" 
Namespace="calico-apiserver" Pod="calico-apiserver-66cd68bd7b-v9gp2" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--v9gp2-eth0" Dec 16 14:00:28.254126 containerd[1603]: 2025-12-16 14:00:28.236 [INFO][4635] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" Namespace="calico-apiserver" Pod="calico-apiserver-66cd68bd7b-v9gp2" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--v9gp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--v9gp2-eth0", GenerateName:"calico-apiserver-66cd68bd7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ee33909-6767-4f65-befa-f64702fcbe38", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66cd68bd7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", ContainerID:"cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642", Pod:"calico-apiserver-66cd68bd7b-v9gp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82f9c19e921", MAC:"6a:fe:f5:a0:95:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:00:28.254126 containerd[1603]: 2025-12-16 14:00:28.246 [INFO][4635] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" Namespace="calico-apiserver" Pod="calico-apiserver-66cd68bd7b-v9gp2" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-calico--apiserver--66cd68bd7b--v9gp2-eth0" Dec 16 14:00:28.296000 audit[4681]: NETFILTER_CFG table=filter:132 family=2 entries=57 op=nft_register_chain pid=4681 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 14:00:28.303262 kernel: kauditd_printk_skb: 378 callbacks suppressed Dec 16 14:00:28.303716 kernel: audit: type=1325 audit(1765893628.296:692): table=filter:132 family=2 entries=57 op=nft_register_chain pid=4681 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 14:00:28.296000 audit[4681]: SYSCALL arch=c000003e syscall=46 success=yes exit=27828 a0=3 a1=7fffbff7d5a0 a2=0 a3=7fffbff7d58c items=0 ppid=4058 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
14:00:28.352856 kernel: audit: type=1300 audit(1765893628.296:692): arch=c000003e syscall=46 success=yes exit=27828 a0=3 a1=7fffbff7d5a0 a2=0 a3=7fffbff7d58c items=0 ppid=4058 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.353481 containerd[1603]: time="2025-12-16T14:00:28.353231139Z" level=info msg="connecting to shim cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642" address="unix:///run/containerd/s/c57ab1815639e285769f82021382e080768ce58b0cae4fa0b51e499f30ba6e2a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:00:28.296000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 14:00:28.369668 kubelet[2843]: E1216 14:00:28.369254 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fdrzj" podUID="61b00035-2995-44f7-ae62-3ec89692e439" Dec 16 14:00:28.375762 kernel: audit: type=1327 audit(1765893628.296:692): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 14:00:28.382477 kubelet[2843]: E1216 14:00:28.382358 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c" Dec 16 14:00:28.458360 systemd-networkd[1502]: califd22505c796: Gained IPv6LL Dec 16 14:00:28.483070 systemd[1]: Started cri-containerd-cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642.scope - libcontainer container cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642. 
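The audit PROCTITLE fields in the entries above are the audited process's argv, hex-encoded with NUL separators. A small decoder (illustrative, not part of this log) recovers the iptables-nft-restore invocation shown in the PROCTITLE record above:

```python
# Decode an audit PROCTITLE value: hex string -> argv list (NUL-separated once decoded).
def decode_proctitle(hex_str: str) -> list[str]:
    return bytes.fromhex(hex_str).decode("utf-8", errors="replace").split("\x00")

print(decode_proctitle(
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
))
# -> ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10', '--wait-interval', '50000']
```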
Dec 16 14:00:28.508000 audit: BPF prog-id=244 op=LOAD Dec 16 14:00:28.516780 kernel: audit: type=1334 audit(1765893628.508:693): prog-id=244 op=LOAD Dec 16 14:00:28.509000 audit: BPF prog-id=245 op=LOAD Dec 16 14:00:28.531775 kernel: audit: type=1334 audit(1765893628.509:694): prog-id=245 op=LOAD Dec 16 14:00:28.509000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4690 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.543812 systemd-networkd[1502]: cali9187a2379c8: Link UP Dec 16 14:00:28.553431 systemd-networkd[1502]: cali9187a2379c8: Gained carrier Dec 16 14:00:28.568803 kernel: audit: type=1300 audit(1765893628.509:694): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4690 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366386366383232376466333864303737383765656631316265303738 Dec 16 14:00:28.603772 kernel: audit: type=1327 audit(1765893628.509:694): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366386366383232376466333864303737383765656631316265303738 Dec 16 14:00:28.509000 audit: BPF prog-id=245 op=UNLOAD Dec 16 14:00:28.612798 kernel: audit: type=1334 audit(1765893628.509:695): prog-id=245 op=UNLOAD Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.118 [INFO][4645] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--k7tpf-eth0 coredns-668d6bf9bc- kube-system 4e270e1c-6e68-4a70-8c3e-e3b7e23d7ba4 830 0 2025-12-16 13:59:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal coredns-668d6bf9bc-k7tpf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9187a2379c8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" Namespace="kube-system" Pod="coredns-668d6bf9bc-k7tpf" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--k7tpf-" Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.119 [INFO][4645] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" Namespace="kube-system" Pod="coredns-668d6bf9bc-k7tpf" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--k7tpf-eth0" Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.173 [INFO][4658] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" HandleID="k8s-pod-network.ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--k7tpf-eth0" Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.174 [INFO][4658] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" HandleID="k8s-pod-network.ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--k7tpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000230ee0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", "pod":"coredns-668d6bf9bc-k7tpf", "timestamp":"2025-12-16 14:00:28.173521337 +0000 UTC"}, Hostname:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.174 [INFO][4658] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.216 [INFO][4658] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.217 [INFO][4658] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal' Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.285 [INFO][4658] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.373 [INFO][4658] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.424 [INFO][4658] ipam/ipam.go 511: Trying affinity for 192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.440 [INFO][4658] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.458 [INFO][4658] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.0/26 host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.458 [INFO][4658] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.23.0/26 handle="k8s-pod-network.ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.466 [INFO][4658] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9 Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.488 [INFO][4658] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.23.0/26 
handle="k8s-pod-network.ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.520 [INFO][4658] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.23.8/26] block=192.168.23.0/26 handle="k8s-pod-network.ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.520 [INFO][4658] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.8/26] handle="k8s-pod-network.ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" host="ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal" Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.520 [INFO][4658] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 14:00:28.616385 containerd[1603]: 2025-12-16 14:00:28.520 [INFO][4658] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.23.8/26] IPv6=[] ContainerID="ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" HandleID="k8s-pod-network.ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" Workload="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--k7tpf-eth0" Dec 16 14:00:28.509000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4690 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.620527 containerd[1603]: 2025-12-16 14:00:28.532 [INFO][4645] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" Namespace="kube-system" Pod="coredns-668d6bf9bc-k7tpf" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--k7tpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--k7tpf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e270e1c-6e68-4a70-8c3e-e3b7e23d7ba4", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 59, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-668d6bf9bc-k7tpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9187a2379c8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:00:28.620527 containerd[1603]: 2025-12-16 14:00:28.532 [INFO][4645] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.8/32] ContainerID="ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" Namespace="kube-system" Pod="coredns-668d6bf9bc-k7tpf" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--k7tpf-eth0" Dec 16 14:00:28.620527 containerd[1603]: 2025-12-16 14:00:28.532 [INFO][4645] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9187a2379c8 ContainerID="ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" Namespace="kube-system" Pod="coredns-668d6bf9bc-k7tpf" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--k7tpf-eth0" Dec 16 14:00:28.620527 containerd[1603]: 2025-12-16 14:00:28.550 [INFO][4645] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" Namespace="kube-system" Pod="coredns-668d6bf9bc-k7tpf" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--k7tpf-eth0" Dec 16 14:00:28.620527 containerd[1603]: 2025-12-16 14:00:28.557 [INFO][4645] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" Namespace="kube-system" Pod="coredns-668d6bf9bc-k7tpf" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--k7tpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--k7tpf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e270e1c-6e68-4a70-8c3e-e3b7e23d7ba4", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 59, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-77991eb294ac483d8194.c.flatcar-212911.internal", ContainerID:"ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9", Pod:"coredns-668d6bf9bc-k7tpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9187a2379c8", MAC:"c2:ad:d5:ab:d9:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 14:00:28.620527 containerd[1603]: 2025-12-16 14:00:28.610 [INFO][4645] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" Namespace="kube-system" Pod="coredns-668d6bf9bc-k7tpf" WorkloadEndpoint="ci--4547--0--0--77991eb294ac483d8194.c.flatcar--212911.internal-k8s-coredns--668d6bf9bc--k7tpf-eth0" Dec 16 14:00:28.647765 kernel: audit: type=1300 audit(1765893628.509:695): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4690 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366386366383232376466333864303737383765656631316265303738 Dec 16 14:00:28.678846 kernel: audit: type=1327 audit(1765893628.509:695): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366386366383232376466333864303737383765656631316265303738 Dec 16 14:00:28.509000 audit: BPF prog-id=246 op=LOAD Dec 16 14:00:28.509000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4690 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366386366383232376466333864303737383765656631316265303738 Dec 16 14:00:28.509000 audit: BPF prog-id=247 op=LOAD Dec 16 14:00:28.509000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4690 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366386366383232376466333864303737383765656631316265303738 Dec 16 14:00:28.509000 audit: BPF prog-id=247 op=UNLOAD Dec 16 14:00:28.509000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4690 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.509000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366386366383232376466333864303737383765656631316265303738 Dec 16 14:00:28.509000 audit: BPF prog-id=246 op=UNLOAD Dec 16 14:00:28.509000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4690 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366386366383232376466333864303737383765656631316265303738 Dec 16 14:00:28.509000 audit: BPF prog-id=248 op=LOAD Dec 16 14:00:28.509000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4690 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366386366383232376466333864303737383765656631316265303738 Dec 16 14:00:28.615000 audit[4722]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=4722 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:00:28.615000 audit[4722]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffccd15cd40 a2=0 a3=7ffccd15cd2c items=0 ppid=3030 pid=4722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:00:28.647000 audit[4722]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=4722 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:00:28.647000 audit[4722]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffccd15cd40 a2=0 a3=0 items=0 ppid=3030 pid=4722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.647000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:00:28.706000 audit[4733]: NETFILTER_CFG table=filter:135 family=2 entries=17 op=nft_register_rule pid=4733 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:00:28.706000 audit[4733]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffee76c9ce0 a2=0 a3=7ffee76c9ccc items=0 ppid=3030 pid=4733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.706000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:00:28.710000 audit[4733]: NETFILTER_CFG table=nat:136 family=2 entries=35 op=nft_register_chain pid=4733 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:00:28.710000 audit[4733]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffee76c9ce0 a2=0 a3=7ffee76c9ccc items=0 ppid=3030 pid=4733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.710000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:00:28.717486 containerd[1603]: time="2025-12-16T14:00:28.717401624Z" level=info msg="connecting to shim ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9" address="unix:///run/containerd/s/3709521b32b840f43b7a95bc8d1cb2ac5cb5fb60d01d5fa34e7e5b2e1e023be6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 14:00:28.808774 containerd[1603]: time="2025-12-16T14:00:28.808418171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66cd68bd7b-v9gp2,Uid:3ee33909-6767-4f65-befa-f64702fcbe38,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cf8cf8227df38d07787eef11be078c356bbb6f7c80f65c1d5ca681850cbc3642\"" Dec 16 14:00:28.814689 containerd[1603]: time="2025-12-16T14:00:28.814643183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 14:00:28.834000 audit[4758]: NETFILTER_CFG table=filter:137 family=2 entries=62 op=nft_register_chain pid=4758 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 14:00:28.834000 audit[4758]: SYSCALL arch=c000003e syscall=46 success=yes exit=27948 a0=3 a1=7ffcaea63080 a2=0 a3=7ffcaea6306c items=0 ppid=4058 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.834000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 14:00:28.852356 systemd[1]: Started cri-containerd-ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9.scope - libcontainer container ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9. 
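The WorkloadEndpoint dumps above list the coredns ports in hexadecimal (Port:0x35, Port:0x23c1). Decoded, they are the usual kube-dns service ports; a one-line check, illustrative and not part of the log:

```python
# 0x35 and 0x23c1 from the WorkloadEndpointPort entries above, in decimal.
print(int("0x35", 16), int("0x23c1", 16))   # -> 53 9153
```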
Dec 16 14:00:28.892000 audit: BPF prog-id=249 op=LOAD Dec 16 14:00:28.895000 audit: BPF prog-id=250 op=LOAD Dec 16 14:00:28.895000 audit[4750]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4739 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162363338396362376663383463356331643465383030313833366337 Dec 16 14:00:28.896000 audit: BPF prog-id=250 op=UNLOAD Dec 16 14:00:28.896000 audit[4750]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4739 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162363338396362376663383463356331643465383030313833366337 Dec 16 14:00:28.896000 audit: BPF prog-id=251 op=LOAD Dec 16 14:00:28.896000 audit[4750]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4739 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162363338396362376663383463356331643465383030313833366337 Dec 16 14:00:28.897000 audit: BPF prog-id=252 op=LOAD Dec 16 14:00:28.897000 audit[4750]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4739 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162363338396362376663383463356331643465383030313833366337 Dec 16 14:00:28.897000 audit: BPF prog-id=252 op=UNLOAD Dec 16 14:00:28.897000 audit[4750]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4739 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162363338396362376663383463356331643465383030313833366337 Dec 16 14:00:28.897000 audit: BPF prog-id=251 op=UNLOAD Dec 16 14:00:28.897000 audit[4750]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4739 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162363338396362376663383463356331643465383030313833366337 Dec 16 14:00:28.897000 audit: BPF prog-id=253 op=LOAD Dec 16 14:00:28.897000 audit[4750]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4739 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:28.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162363338396362376663383463356331643465383030313833366337 Dec 16 14:00:28.959299 containerd[1603]: time="2025-12-16T14:00:28.959147318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k7tpf,Uid:4e270e1c-6e68-4a70-8c3e-e3b7e23d7ba4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9\"" Dec 16 14:00:28.967707 containerd[1603]: time="2025-12-16T14:00:28.967571991Z" level=info msg="CreateContainer within sandbox \"ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 14:00:28.997594 containerd[1603]: time="2025-12-16T14:00:28.996326897Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:00:28.997787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2752254326.mount: Deactivated successfully. 
Dec 16 14:00:28.999145 containerd[1603]: time="2025-12-16T14:00:28.998662262Z" level=info msg="Container 8b35c96104ce7fbf7f1bc8af78b64dfc78ec04cc1363809711d3525ee119937c: CDI devices from CRI Config.CDIDevices: []" Dec 16 14:00:29.002866 containerd[1603]: time="2025-12-16T14:00:29.001724433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 14:00:29.007102 containerd[1603]: time="2025-12-16T14:00:29.007065431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 14:00:29.007429 kubelet[2843]: E1216 14:00:29.007381 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:00:29.007542 kubelet[2843]: E1216 14:00:29.007446 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:00:29.007682 kubelet[2843]: E1216 14:00:29.007611 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ftvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66cd68bd7b-v9gp2_calico-apiserver(3ee33909-6767-4f65-befa-f64702fcbe38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 14:00:29.009006 kubelet[2843]: E1216 14:00:29.008860 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-v9gp2" podUID="3ee33909-6767-4f65-befa-f64702fcbe38" Dec 16 14:00:29.019698 containerd[1603]: time="2025-12-16T14:00:29.019625684Z" level=info msg="CreateContainer within sandbox \"ab6389cb7fc84c5c1d4e8001836c7dfc8f3596c1a6da9646fd412504842e8cf9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b35c96104ce7fbf7f1bc8af78b64dfc78ec04cc1363809711d3525ee119937c\"" Dec 16 14:00:29.020519 containerd[1603]: time="2025-12-16T14:00:29.020475804Z" level=info msg="StartContainer for \"8b35c96104ce7fbf7f1bc8af78b64dfc78ec04cc1363809711d3525ee119937c\"" Dec 16 14:00:29.023603 containerd[1603]: time="2025-12-16T14:00:29.023537611Z" level=info msg="connecting to shim 8b35c96104ce7fbf7f1bc8af78b64dfc78ec04cc1363809711d3525ee119937c" address="unix:///run/containerd/s/3709521b32b840f43b7a95bc8d1cb2ac5cb5fb60d01d5fa34e7e5b2e1e023be6" protocol=ttrpc version=3 Dec 16 14:00:29.050032 systemd[1]: Started cri-containerd-8b35c96104ce7fbf7f1bc8af78b64dfc78ec04cc1363809711d3525ee119937c.scope - libcontainer container 8b35c96104ce7fbf7f1bc8af78b64dfc78ec04cc1363809711d3525ee119937c. 
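The calico-apiserver ErrImagePull just above, like the goldmane failure earlier, is followed by ImagePullBackOff entries in which the kubelet retries the pull on an exponentially growing delay. The sketch below only illustrates that retry pattern; the 10 s initial delay and 5 minute ceiling are commonly cited kubelet defaults and are an assumption here, not values recorded in this log:

```python
# Illustrative back-off schedule behind the "Back-off pulling image" entries; the
# initial delay and cap are assumed defaults, not taken from this log.
def backoff_delays(initial: float = 10.0, cap: float = 300.0, attempts: int = 8):
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, cap)   # double after each failed pull, up to the cap

print(list(backoff_delays()))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
```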
Dec 16 14:00:29.072000 audit: BPF prog-id=254 op=LOAD Dec 16 14:00:29.074000 audit: BPF prog-id=255 op=LOAD Dec 16 14:00:29.074000 audit[4786]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4739 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:29.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862333563393631303463653766626637663162633861663738623634 Dec 16 14:00:29.074000 audit: BPF prog-id=255 op=UNLOAD Dec 16 14:00:29.074000 audit[4786]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4739 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:29.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862333563393631303463653766626637663162633861663738623634 Dec 16 14:00:29.074000 audit: BPF prog-id=256 op=LOAD Dec 16 14:00:29.074000 audit[4786]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4739 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:29.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862333563393631303463653766626637663162633861663738623634 Dec 16 14:00:29.074000 audit: BPF prog-id=257 op=LOAD Dec 16 14:00:29.074000 audit[4786]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4739 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:29.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862333563393631303463653766626637663162633861663738623634 Dec 16 14:00:29.074000 audit: BPF prog-id=257 op=UNLOAD Dec 16 14:00:29.074000 audit[4786]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4739 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:29.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862333563393631303463653766626637663162633861663738623634 Dec 16 14:00:29.074000 audit: BPF prog-id=256 op=UNLOAD Dec 16 14:00:29.074000 audit[4786]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4739 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:29.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862333563393631303463653766626637663162633861663738623634 Dec 16 14:00:29.074000 audit: BPF prog-id=258 op=LOAD Dec 16 14:00:29.074000 audit[4786]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4739 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:29.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862333563393631303463653766626637663162633861663738623634 Dec 16 14:00:29.107916 containerd[1603]: time="2025-12-16T14:00:29.107787513Z" level=info msg="StartContainer for \"8b35c96104ce7fbf7f1bc8af78b64dfc78ec04cc1363809711d3525ee119937c\" returns successfully" Dec 16 14:00:29.289016 systemd-networkd[1502]: cali82f9c19e921: Gained IPv6LL Dec 16 14:00:29.358324 kubelet[2843]: E1216 14:00:29.358167 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-v9gp2" podUID="3ee33909-6767-4f65-befa-f64702fcbe38" Dec 16 14:00:29.358324 kubelet[2843]: E1216 14:00:29.358274 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fdrzj" podUID="61b00035-2995-44f7-ae62-3ec89692e439" Dec 16 14:00:29.394772 kubelet[2843]: I1216 14:00:29.392877 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k7tpf" podStartSLOduration=52.392853895 podStartE2EDuration="52.392853895s" podCreationTimestamp="2025-12-16 13:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 14:00:29.374160501 +0000 UTC m=+59.541503783" watchObservedRunningTime="2025-12-16 14:00:29.392853895 +0000 UTC m=+59.560197184" Dec 16 14:00:29.746000 audit[4819]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=4819 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:00:29.746000 audit[4819]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff54858310 a2=0 
a3=7fff548582fc items=0 ppid=3030 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:29.746000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:00:29.757000 audit[4819]: NETFILTER_CFG table=nat:139 family=2 entries=56 op=nft_register_chain pid=4819 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:00:29.757000 audit[4819]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff54858310 a2=0 a3=7fff548582fc items=0 ppid=3030 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:29.757000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:00:30.364406 kubelet[2843]: E1216 14:00:30.364326 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-v9gp2" podUID="3ee33909-6767-4f65-befa-f64702fcbe38" Dec 16 14:00:30.569365 systemd-networkd[1502]: cali9187a2379c8: Gained IPv6LL Dec 16 14:00:33.248480 ntpd[1554]: Listen normally on 7 vxlan.calico 192.168.23.0:123 Dec 16 14:00:33.249196 ntpd[1554]: 16 Dec 14:00:33 ntpd[1554]: Listen normally on 7 vxlan.calico 192.168.23.0:123 Dec 16 14:00:33.249196 ntpd[1554]: 16 Dec 14:00:33 ntpd[1554]: Listen normally on 8 cali973889bab2b [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 14:00:33.249196 ntpd[1554]: 16 Dec 14:00:33 ntpd[1554]: Listen normally on 9 cali176548b1225 [fe80::ecee:eeff:feee:eeee%5]:123 Dec 16 14:00:33.249196 ntpd[1554]: 16 Dec 14:00:33 ntpd[1554]: Listen normally on 10 calid52b3c9bcd4 [fe80::ecee:eeff:feee:eeee%6]:123 Dec 16 14:00:33.249196 ntpd[1554]: 16 Dec 14:00:33 ntpd[1554]: Listen normally on 11 vxlan.calico [fe80::64b4:deff:fe59:fc2%7]:123 Dec 16 14:00:33.249196 ntpd[1554]: 16 Dec 14:00:33 ntpd[1554]: Listen normally on 12 cali463dc795227 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 14:00:33.249196 ntpd[1554]: 16 Dec 14:00:33 ntpd[1554]: Listen normally on 13 caliba1f2f5ce15 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 14:00:33.249196 ntpd[1554]: 16 Dec 14:00:33 ntpd[1554]: Listen normally on 14 califd22505c796 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 14:00:33.249196 ntpd[1554]: 16 Dec 14:00:33 ntpd[1554]: Listen normally on 15 cali82f9c19e921 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 14:00:33.249196 ntpd[1554]: 16 Dec 14:00:33 ntpd[1554]: Listen normally on 16 cali9187a2379c8 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 14:00:33.248596 ntpd[1554]: Listen normally on 8 cali973889bab2b [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 14:00:33.248650 ntpd[1554]: Listen normally on 9 cali176548b1225 [fe80::ecee:eeff:feee:eeee%5]:123 Dec 16 14:00:33.248701 ntpd[1554]: Listen normally on 10 calid52b3c9bcd4 [fe80::ecee:eeff:feee:eeee%6]:123 Dec 16 14:00:33.248786 ntpd[1554]: Listen normally on 11 
vxlan.calico [fe80::64b4:deff:fe59:fc2%7]:123 Dec 16 14:00:33.248840 ntpd[1554]: Listen normally on 12 cali463dc795227 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 14:00:33.248893 ntpd[1554]: Listen normally on 13 caliba1f2f5ce15 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 14:00:33.248940 ntpd[1554]: Listen normally on 14 califd22505c796 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 14:00:33.248987 ntpd[1554]: Listen normally on 15 cali82f9c19e921 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 14:00:33.249033 ntpd[1554]: Listen normally on 16 cali9187a2379c8 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 14:00:39.031447 containerd[1603]: time="2025-12-16T14:00:39.030938884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 14:00:39.192473 containerd[1603]: time="2025-12-16T14:00:39.192384885Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:00:39.194280 containerd[1603]: time="2025-12-16T14:00:39.194218181Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 14:00:39.194551 containerd[1603]: time="2025-12-16T14:00:39.194323780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 14:00:39.194624 kubelet[2843]: E1216 14:00:39.194563 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 14:00:39.195938 kubelet[2843]: E1216 14:00:39.194632 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 14:00:39.195938 kubelet[2843]: E1216 14:00:39.194868 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a4506af38c3640db88d3886b67f3e843,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wgctf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b5bfd867b-v5ftq_calico-system(4e05f7b2-49a2-4ebc-8e80-5e6f910c574a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 14:00:39.198559 containerd[1603]: time="2025-12-16T14:00:39.198495579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 14:00:39.361287 containerd[1603]: time="2025-12-16T14:00:39.361105045Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:00:39.362836 containerd[1603]: time="2025-12-16T14:00:39.362705593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 14:00:39.363061 containerd[1603]: time="2025-12-16T14:00:39.362793007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 14:00:39.363280 kubelet[2843]: E1216 14:00:39.363193 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 14:00:39.363499 kubelet[2843]: E1216 14:00:39.363439 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 14:00:39.363695 kubelet[2843]: E1216 14:00:39.363630 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgctf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b5bfd867b-v5ftq_calico-system(4e05f7b2-49a2-4ebc-8e80-5e6f910c574a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 14:00:39.365027 kubelet[2843]: E1216 14:00:39.364943 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5bfd867b-v5ftq" podUID="4e05f7b2-49a2-4ebc-8e80-5e6f910c574a" Dec 16 14:00:41.031127 containerd[1603]: time="2025-12-16T14:00:41.030963588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 14:00:41.190509 containerd[1603]: time="2025-12-16T14:00:41.190426819Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:00:41.191912 containerd[1603]: time="2025-12-16T14:00:41.191855294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 
16 14:00:41.192156 containerd[1603]: time="2025-12-16T14:00:41.191876412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 14:00:41.192225 kubelet[2843]: E1216 14:00:41.192171 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:00:41.192725 kubelet[2843]: E1216 14:00:41.192253 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:00:41.192725 kubelet[2843]: E1216 14:00:41.192579 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66cd68bd7b-ds2n7_calico-apiserver(d41be582-68e3-4041-abac-e335f6c6ba13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 14:00:41.194555 containerd[1603]: time="2025-12-16T14:00:41.194080627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 
14:00:41.194675 kubelet[2843]: E1216 14:00:41.194457 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-ds2n7" podUID="d41be582-68e3-4041-abac-e335f6c6ba13" Dec 16 14:00:41.359210 containerd[1603]: time="2025-12-16T14:00:41.359034716Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:00:41.360610 containerd[1603]: time="2025-12-16T14:00:41.360474355Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 14:00:41.360610 containerd[1603]: time="2025-12-16T14:00:41.360491194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 14:00:41.360861 kubelet[2843]: E1216 14:00:41.360797 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 14:00:41.360945 kubelet[2843]: E1216 14:00:41.360859 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 14:00:41.361234 kubelet[2843]: E1216 14:00:41.361047 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2cq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d8d578c6b-htjdj_calico-system(1769a332-0974-4355-84f4-605660a8e93f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 14:00:41.362420 kubelet[2843]: E1216 14:00:41.362359 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d8d578c6b-htjdj" podUID="1769a332-0974-4355-84f4-605660a8e93f" Dec 16 14:00:42.031938 containerd[1603]: time="2025-12-16T14:00:42.031854407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 14:00:42.201897 containerd[1603]: time="2025-12-16T14:00:42.201822323Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:00:42.203523 containerd[1603]: time="2025-12-16T14:00:42.203453811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 14:00:42.203648 containerd[1603]: time="2025-12-16T14:00:42.203567283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 14:00:42.203833 kubelet[2843]: E1216 14:00:42.203787 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 14:00:42.205149 kubelet[2843]: E1216 14:00:42.203850 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 14:00:42.205149 kubelet[2843]: E1216 14:00:42.204635 2843 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd72z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7s56z_calico-system(fff9659c-3470-45ea-9613-14efa791d03c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 14:00:42.205828 containerd[1603]: time="2025-12-16T14:00:42.204899774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 14:00:42.366962 containerd[1603]: time="2025-12-16T14:00:42.366793063Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:00:42.368508 containerd[1603]: time="2025-12-16T14:00:42.368444226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 14:00:42.368978 containerd[1603]: time="2025-12-16T14:00:42.368555281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 14:00:42.369181 kubelet[2843]: E1216 14:00:42.368802 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 14:00:42.369181 kubelet[2843]: E1216 14:00:42.368872 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 14:00:42.369864 kubelet[2843]: E1216 14:00:42.369761 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbzd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fdrzj_calico-system(61b00035-2995-44f7-ae62-3ec89692e439): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 14:00:42.370676 containerd[1603]: time="2025-12-16T14:00:42.370635225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 14:00:42.371242 kubelet[2843]: E1216 14:00:42.371181 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fdrzj" podUID="61b00035-2995-44f7-ae62-3ec89692e439" Dec 16 14:00:42.540512 containerd[1603]: time="2025-12-16T14:00:42.540446279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:00:42.542113 containerd[1603]: time="2025-12-16T14:00:42.541973845Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 14:00:42.542113 containerd[1603]: time="2025-12-16T14:00:42.542005225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 14:00:42.542510 kubelet[2843]: E1216 14:00:42.542429 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 14:00:42.542510 kubelet[2843]: E1216 14:00:42.542494 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 14:00:42.542941 kubelet[2843]: E1216 14:00:42.542688 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd72z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7s56z_calico-system(fff9659c-3470-45ea-9613-14efa791d03c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 14:00:42.543958 kubelet[2843]: E1216 14:00:42.543891 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c" Dec 16 14:00:44.315791 kernel: kauditd_printk_skb: 80 callbacks suppressed Dec 16 14:00:44.316045 kernel: audit: type=1130 audit(1765893644.307:724): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.79:22-139.178.68.195:45596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:00:44.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.79:22-139.178.68.195:45596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:00:44.308072 systemd[1]: Started sshd@9-10.128.0.79:22-139.178.68.195:45596.service - OpenSSH per-connection server daemon (139.178.68.195:45596). Dec 16 14:00:44.619000 audit[4845]: USER_ACCT pid=4845 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:44.623065 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:00:44.624535 sshd[4845]: Accepted publickey for core from 139.178.68.195 port 45596 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 14:00:44.634562 systemd-logind[1569]: New session 11 of user core. Dec 16 14:00:44.650950 kernel: audit: type=1101 audit(1765893644.619:725): pid=4845 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:44.651061 kernel: audit: type=1103 audit(1765893644.620:726): pid=4845 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:44.620000 audit[4845]: CRED_ACQ pid=4845 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:44.684240 kernel: audit: type=1006 audit(1765893644.620:727): pid=4845 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 14:00:44.678811 systemd[1]: Started session-11.scope - Session 11 of User core. 
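The kernel audit records above timestamp themselves as audit(EPOCH.millis:SERIAL), for example audit(1765893644.307:724), independently of the wall-clock prefix on each journal line. A minimal Python sketch for converting that epoch field to UTC so the two timelines can be compared; the sample value is copied from the record above.

```python
# Convert the epoch field of an audit(EPOCH.millis:SERIAL) stamp to UTC,
# so kernel audit records can be lined up with the journal timestamps.
import re
from datetime import datetime, timezone

def audit_stamp_to_utc(stamp: str) -> datetime:
    # e.g. "audit(1765893644.307:724)" -> 2025-12-16 14:00:44.307 UTC
    match = re.search(r"audit\((\d+)\.(\d+):(\d+)\)", stamp)
    seconds, millis, _serial = match.groups()
    return datetime.fromtimestamp(int(seconds) + int(millis) / 1000, tz=timezone.utc)

print(audit_stamp_to_utc("audit(1765893644.307:724)"))
# 2025-12-16 14:00:44.307000+00:00, matching the surrounding journal lines
```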
Dec 16 14:00:44.620000 audit[4845]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce0725e60 a2=3 a3=0 items=0 ppid=1 pid=4845 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:44.693852 kernel: audit: type=1300 audit(1765893644.620:727): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce0725e60 a2=3 a3=0 items=0 ppid=1 pid=4845 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:44.620000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:00:44.734161 kernel: audit: type=1327 audit(1765893644.620:727): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:00:44.686000 audit[4845]: USER_START pid=4845 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:44.796899 kernel: audit: type=1105 audit(1765893644.686:728): pid=4845 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:44.797138 kernel: audit: type=1103 audit(1765893644.689:729): pid=4849 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:44.689000 audit[4849]: CRED_ACQ pid=4849 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:44.920717 sshd[4849]: Connection closed by 139.178.68.195 port 45596 Dec 16 14:00:44.921685 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Dec 16 14:00:44.923000 audit[4845]: USER_END pid=4845 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:44.928455 systemd[1]: sshd@9-10.128.0.79:22-139.178.68.195:45596.service: Deactivated successfully. Dec 16 14:00:44.933562 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 14:00:44.940170 systemd-logind[1569]: Session 11 logged out. Waiting for processes to exit. Dec 16 14:00:44.942039 systemd-logind[1569]: Removed session 11. 
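The audit PROCTITLE records above hex-encode the full command line, with NUL bytes separating argv entries. A short decoding sketch; both sample values are copied from PROCTITLE records in this log (the sshd-session one just above and the iptables-restore one logged at 14:00:29).

```python
# Decode an audit PROCTITLE value: the command line is hex-encoded,
# with NUL bytes separating argv entries.
def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    return [part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part]

# Values taken from PROCTITLE records earlier in this log.
print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# ['sshd-session: core [priv]']
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"))
# ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```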
Dec 16 14:00:44.924000 audit[4845]: CRED_DISP pid=4845 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:44.962787 kernel: audit: type=1106 audit(1765893644.923:730): pid=4845 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:44.962854 kernel: audit: type=1104 audit(1765893644.924:731): pid=4845 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:44.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.79:22-139.178.68.195:45596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:00:45.031150 containerd[1603]: time="2025-12-16T14:00:45.030916793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 14:00:45.208120 containerd[1603]: time="2025-12-16T14:00:45.207952109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:00:45.209673 containerd[1603]: time="2025-12-16T14:00:45.209594579Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 14:00:45.209921 containerd[1603]: time="2025-12-16T14:00:45.209706558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 14:00:45.210037 kubelet[2843]: E1216 14:00:45.209888 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:00:45.210037 kubelet[2843]: E1216 14:00:45.209948 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 14:00:45.210993 kubelet[2843]: E1216 14:00:45.210131 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ftvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66cd68bd7b-v9gp2_calico-apiserver(3ee33909-6767-4f65-befa-f64702fcbe38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 14:00:45.211858 kubelet[2843]: E1216 14:00:45.211794 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-v9gp2" podUID="3ee33909-6767-4f65-befa-f64702fcbe38" Dec 16 14:00:49.989652 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 14:00:49.989817 kernel: audit: type=1130 audit(1765893649.980:733): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.79:22-139.178.68.195:45608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:00:49.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.79:22-139.178.68.195:45608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:00:49.980567 systemd[1]: Started sshd@10-10.128.0.79:22-139.178.68.195:45608.service - OpenSSH per-connection server daemon (139.178.68.195:45608). 
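The containerd and kubelet records above repeat one pattern: ghcr.io answers 404, containerd reports "failed to resolve image", and kubelet surfaces ErrImagePull followed later by ImagePullBackOff. A rough sketch for summarising which image references are affected in a saved copy of this journal; the filename node.log is only a placeholder for wherever the capture was written.

```python
# Collect the image references that fail with "failed to resolve image"
# in a saved copy of this journal (the path "node.log" is only an example).
import re
from collections import Counter

pattern = re.compile(r"failed to resolve image: (\S+?): not found")

counts = Counter()
with open("node.log", encoding="utf-8") as log:
    for line in log:
        counts.update(pattern.findall(line))

for image, hits in counts.most_common():
    print(f"{hits:4d}  {image}")
# Expected entries include ghcr.io/flatcar/calico/apiserver:v3.30.4,
# .../whisker:v3.30.4, .../kube-controllers:v3.30.4, .../csi:v3.30.4, ...
```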
Dec 16 14:00:50.299000 audit[4870]: USER_ACCT pid=4870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:50.307342 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:00:50.312500 sshd[4870]: Accepted publickey for core from 139.178.68.195 port 45608 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 14:00:50.328466 systemd-logind[1569]: New session 12 of user core. Dec 16 14:00:50.331796 kernel: audit: type=1101 audit(1765893650.299:734): pid=4870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:50.303000 audit[4870]: CRED_ACQ pid=4870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:50.334250 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 14:00:50.359835 kernel: audit: type=1103 audit(1765893650.303:735): pid=4870 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:50.304000 audit[4870]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd9d8ba70 a2=3 a3=0 items=0 ppid=1 pid=4870 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:50.408306 kernel: audit: type=1006 audit(1765893650.304:736): pid=4870 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 14:00:50.408419 kernel: audit: type=1300 audit(1765893650.304:736): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd9d8ba70 a2=3 a3=0 items=0 ppid=1 pid=4870 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:50.304000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:00:50.421777 kernel: audit: type=1327 audit(1765893650.304:736): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:00:50.353000 audit[4870]: USER_START pid=4870 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:50.460838 kernel: audit: type=1105 audit(1765893650.353:737): pid=4870 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:50.361000 audit[4874]: CRED_ACQ pid=4874 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:50.489397 kernel: audit: type=1103 audit(1765893650.361:738): pid=4874 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:50.615728 sshd[4874]: Connection closed by 139.178.68.195 port 45608 Dec 16 14:00:50.616982 sshd-session[4870]: pam_unix(sshd:session): session closed for user core Dec 16 14:00:50.622000 audit[4870]: USER_END pid=4870 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:50.632043 systemd[1]: sshd@10-10.128.0.79:22-139.178.68.195:45608.service: Deactivated successfully. Dec 16 14:00:50.633690 systemd-logind[1569]: Session 12 logged out. Waiting for processes to exit. Dec 16 14:00:50.637571 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 14:00:50.643171 systemd-logind[1569]: Removed session 12. Dec 16 14:00:50.660797 kernel: audit: type=1106 audit(1765893650.622:739): pid=4870 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:50.622000 audit[4870]: CRED_DISP pid=4870 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:50.687023 kernel: audit: type=1104 audit(1765893650.622:740): pid=4870 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:50.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.79:22-139.178.68.195:45608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:00:51.033079 kubelet[2843]: E1216 14:00:51.033013 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5bfd867b-v5ftq" podUID="4e05f7b2-49a2-4ebc-8e80-5e6f910c574a" Dec 16 14:00:55.032158 kubelet[2843]: E1216 14:00:55.032100 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-ds2n7" podUID="d41be582-68e3-4041-abac-e335f6c6ba13" Dec 16 14:00:55.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.79:22-139.178.68.195:52594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:00:55.669197 systemd[1]: Started sshd@11-10.128.0.79:22-139.178.68.195:52594.service - OpenSSH per-connection server daemon (139.178.68.195:52594). Dec 16 14:00:55.674904 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 14:00:55.675000 kernel: audit: type=1130 audit(1765893655.668:742): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.79:22-139.178.68.195:52594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:00:55.977000 audit[4915]: USER_ACCT pid=4915 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:55.986885 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:00:55.988511 sshd[4915]: Accepted publickey for core from 139.178.68.195 port 52594 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 14:00:56.008772 kernel: audit: type=1101 audit(1765893655.977:743): pid=4915 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:55.982000 audit[4915]: CRED_ACQ pid=4915 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:56.019295 systemd-logind[1569]: New session 13 of user core. Dec 16 14:00:56.036853 kernel: audit: type=1103 audit(1765893655.982:744): pid=4915 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:56.037049 kubelet[2843]: E1216 14:00:56.036992 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d8d578c6b-htjdj" podUID="1769a332-0974-4355-84f4-605660a8e93f" Dec 16 14:00:56.038409 kubelet[2843]: E1216 14:00:56.038260 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c" Dec 16 14:00:56.056343 kernel: audit: type=1006 audit(1765893655.982:745): pid=4915 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 14:00:56.055149 systemd[1]: Started session-13.scope - Session 13 of User core. 
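The 404s reported above can be re-checked from outside the node against the registry itself. A hedged sketch, assuming ghcr.io's usual anonymous token endpoint for public images and the standard OCI /v2 manifest route; the repository and tag are taken from the failing pulls above, and a 404 here corresponds to the "not found" the runtime reported.

```python
# Re-check one of the failing references directly against ghcr.io.
# Assumptions: the repository allows anonymous pulls, and ghcr.io's public
# token endpoint plus the standard OCI /v2 manifest route behave as below.
import json
import urllib.error
import urllib.request

def ghcr_tag_exists(repo: str, tag: str) -> bool:
    # Anonymous pull token for the repository (public packages only).
    token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull&service=ghcr.io"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    # HEAD the manifest; a 404 is the same "not found" the runtime reported above.
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# Repository and tag taken from the failing pulls in this log.
print(ghcr_tag_exists("flatcar/calico/apiserver", "v3.30.4"))
```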
Dec 16 14:00:55.982000 audit[4915]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffaf66c910 a2=3 a3=0 items=0 ppid=1 pid=4915 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:56.088827 kernel: audit: type=1300 audit(1765893655.982:745): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffaf66c910 a2=3 a3=0 items=0 ppid=1 pid=4915 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:00:55.982000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:00:56.099767 kernel: audit: type=1327 audit(1765893655.982:745): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:00:56.088000 audit[4915]: USER_START pid=4915 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:56.137867 kernel: audit: type=1105 audit(1765893656.088:746): pid=4915 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:56.112000 audit[4919]: CRED_ACQ pid=4919 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:56.164488 kernel: audit: type=1103 audit(1765893656.112:747): pid=4919 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:56.311868 sshd[4919]: Connection closed by 139.178.68.195 port 52594 Dec 16 14:00:56.312796 sshd-session[4915]: pam_unix(sshd:session): session closed for user core Dec 16 14:00:56.315000 audit[4915]: USER_END pid=4915 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:00:56.323136 systemd-logind[1569]: Session 13 logged out. Waiting for processes to exit. Dec 16 14:00:56.324294 systemd[1]: sshd@11-10.128.0.79:22-139.178.68.195:52594.service: Deactivated successfully. Dec 16 14:00:56.330277 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 14:00:56.334842 systemd-logind[1569]: Removed session 13. 
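The sshd and PAM records above come in matched pairs: pam_unix logs "session opened" and "session closed" under the same sshd-session PID (sessions 11 through 13 so far). A sketch that pairs them into per-session durations, assuming the capture is saved one record per line in the format shown above to a placeholder file node.log; the year is hard-coded because the short syslog-style timestamps omit it (2025 is taken from the ISO timestamps elsewhere in this log).

```python
# Pair the pam_unix "session opened"/"session closed" records by sshd-session PID
# and report how long each SSH session (e.g. sessions 11-13 above) stayed open.
import re
from datetime import datetime

RECORD = re.compile(r"^(\w{3}) (\d+) (\d+:\d+:\d+\.\d+) sshd-session\[(\d+)\]: "
                    r"pam_unix\(sshd:session\): session (opened|closed)")

opened = {}
with open("node.log", encoding="utf-8") as log:
    for line in log:
        m = RECORD.match(line)
        if not m:
            continue
        month, day, clock, pid, event = m.groups()
        # The timestamps carry no year; 2025 comes from the ISO stamps in this log.
        ts = datetime.strptime(f"2025 {month} {day} {clock}", "%Y %b %d %H:%M:%S.%f")
        if event == "opened":
            opened[pid] = ts
        elif pid in opened:
            print(f"sshd-session[{pid}]: {(ts - opened.pop(pid)).total_seconds():.3f}s")
```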
Dec 16 14:00:56.353337 kernel: audit: type=1106 audit(1765893656.315:748): pid=4915 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:00:56.315000 audit[4915]: CRED_DISP pid=4915 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:00:56.383189 kernel: audit: type=1104 audit(1765893656.315:749): pid=4915 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:00:56.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.79:22-139.178.68.195:52594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 14:00:56.393123 systemd[1]: Started sshd@12-10.128.0.79:22-139.178.68.195:52596.service - OpenSSH per-connection server daemon (139.178.68.195:52596).
Dec 16 14:00:56.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.79:22-139.178.68.195:52596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 14:00:56.672000 audit[4932]: USER_ACCT pid=4932 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:00:56.675105 sshd[4932]: Accepted publickey for core from 139.178.68.195 port 52596 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk
Dec 16 14:00:56.674000 audit[4932]: CRED_ACQ pid=4932 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:00:56.674000 audit[4932]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd4617120 a2=3 a3=0 items=0 ppid=1 pid=4932 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 14:00:56.674000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 14:00:56.677721 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 14:00:56.688482 systemd-logind[1569]: New session 14 of user core.
Dec 16 14:00:56.695011 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 16 14:00:56.700000 audit[4932]: USER_START pid=4932 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:00:56.702000 audit[4938]: CRED_ACQ pid=4938 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:00:56.986013 sshd[4938]: Connection closed by 139.178.68.195 port 52596
Dec 16 14:00:56.989289 sshd-session[4932]: pam_unix(sshd:session): session closed for user core
Dec 16 14:00:56.993000 audit[4932]: USER_END pid=4932 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:00:56.993000 audit[4932]: CRED_DISP pid=4932 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:00:57.001453 systemd[1]: sshd@12-10.128.0.79:22-139.178.68.195:52596.service: Deactivated successfully.
Dec 16 14:00:57.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.79:22-139.178.68.195:52596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 14:00:57.007047 systemd[1]: session-14.scope: Deactivated successfully.
Dec 16 14:00:57.010893 systemd-logind[1569]: Session 14 logged out. Waiting for processes to exit.
Dec 16 14:00:57.017572 systemd-logind[1569]: Removed session 14.
Dec 16 14:00:57.046449 systemd[1]: Started sshd@13-10.128.0.79:22-139.178.68.195:52610.service - OpenSSH per-connection server daemon (139.178.68.195:52610).
Dec 16 14:00:57.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.79:22-139.178.68.195:52610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 14:00:57.364000 audit[4950]: USER_ACCT pid=4950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:00:57.367191 sshd[4950]: Accepted publickey for core from 139.178.68.195 port 52610 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk
Dec 16 14:00:57.369000 audit[4950]: CRED_ACQ pid=4950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:00:57.369000 audit[4950]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca8408840 a2=3 a3=0 items=0 ppid=1 pid=4950 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 14:00:57.369000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 14:00:57.374077 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 14:00:57.391381 systemd-logind[1569]: New session 15 of user core.
Dec 16 14:00:57.396680 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 14:00:57.407000 audit[4950]: USER_START pid=4950 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:00:57.411000 audit[4955]: CRED_ACQ pid=4955 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:00:57.670794 sshd[4955]: Connection closed by 139.178.68.195 port 52610
Dec 16 14:00:57.672159 sshd-session[4950]: pam_unix(sshd:session): session closed for user core
Dec 16 14:00:57.674000 audit[4950]: USER_END pid=4950 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:00:57.674000 audit[4950]: CRED_DISP pid=4950 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:00:57.681866 systemd[1]: sshd@13-10.128.0.79:22-139.178.68.195:52610.service: Deactivated successfully.
Dec 16 14:00:57.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.79:22-139.178.68.195:52610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 14:00:57.686493 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 14:00:57.689815 systemd-logind[1569]: Session 15 logged out. Waiting for processes to exit.
Dec 16 14:00:57.695383 systemd-logind[1569]: Removed session 15.
Dec 16 14:00:58.034822 kubelet[2843]: E1216 14:00:58.034298 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fdrzj" podUID="61b00035-2995-44f7-ae62-3ec89692e439"
Dec 16 14:01:00.033875 kubelet[2843]: E1216 14:01:00.033807 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-v9gp2" podUID="3ee33909-6767-4f65-befa-f64702fcbe38"
Dec 16 14:01:02.731688 systemd[1]: Started sshd@14-10.128.0.79:22-139.178.68.195:50932.service - OpenSSH per-connection server daemon (139.178.68.195:50932).
Dec 16 14:01:02.743948 kernel: kauditd_printk_skb: 23 callbacks suppressed
Dec 16 14:01:02.744072 kernel: audit: type=1130 audit(1765893662.731:769): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.79:22-139.178.68.195:50932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 14:01:02.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.79:22-139.178.68.195:50932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 14:01:03.040000 audit[4969]: USER_ACCT pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:03.044552 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 14:01:03.048493 sshd[4969]: Accepted publickey for core from 139.178.68.195 port 50932 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk
Dec 16 14:01:03.060114 systemd-logind[1569]: New session 16 of user core.
Dec 16 14:01:03.072864 kernel: audit: type=1101 audit(1765893663.040:770): pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:03.072997 kernel: audit: type=1103 audit(1765893663.042:771): pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:03.042000 audit[4969]: CRED_ACQ pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:03.098919 kernel: audit: type=1006 audit(1765893663.042:772): pid=4969 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1
Dec 16 14:01:03.042000 audit[4969]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2a39de20 a2=3 a3=0 items=0 ppid=1 pid=4969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 14:01:03.042000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 14:01:03.146182 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 14:01:03.155951 kernel: audit: type=1300 audit(1765893663.042:772): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2a39de20 a2=3 a3=0 items=0 ppid=1 pid=4969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 14:01:03.156034 kernel: audit: type=1327 audit(1765893663.042:772): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 14:01:03.157000 audit[4969]: USER_START pid=4969 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:03.196795 kernel: audit: type=1105 audit(1765893663.157:773): pid=4969 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:03.196935 kernel: audit: type=1103 audit(1765893663.157:774): pid=4973 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:03.157000 audit[4973]: CRED_ACQ pid=4973 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:03.352041 sshd[4973]: Connection closed by 139.178.68.195 port 50932
Dec 16 14:01:03.354888 sshd-session[4969]: pam_unix(sshd:session): session closed for user core
Dec 16 14:01:03.359000 audit[4969]: USER_END pid=4969 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:03.365204 systemd[1]: sshd@14-10.128.0.79:22-139.178.68.195:50932.service: Deactivated successfully.
Dec 16 14:01:03.370399 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 14:01:03.379671 systemd-logind[1569]: Session 16 logged out. Waiting for processes to exit.
Dec 16 14:01:03.381823 systemd-logind[1569]: Removed session 16.
Dec 16 14:01:03.406276 kernel: audit: type=1106 audit(1765893663.359:775): pid=4969 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:03.406412 kernel: audit: type=1104 audit(1765893663.359:776): pid=4969 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:03.359000 audit[4969]: CRED_DISP pid=4969 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:03.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.79:22-139.178.68.195:50932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 14:01:05.031789 containerd[1603]: time="2025-12-16T14:01:05.031708981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 14:01:05.200781 containerd[1603]: time="2025-12-16T14:01:05.200562142Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 14:01:05.203132 containerd[1603]: time="2025-12-16T14:01:05.202983159Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 14:01:05.203132 containerd[1603]: time="2025-12-16T14:01:05.203052487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Dec 16 14:01:05.203948 kubelet[2843]: E1216 14:01:05.203879 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 14:01:05.204612 kubelet[2843]: E1216 14:01:05.203979 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 14:01:05.204612 kubelet[2843]: E1216 14:01:05.204141 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a4506af38c3640db88d3886b67f3e843,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wgctf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b5bfd867b-v5ftq_calico-system(4e05f7b2-49a2-4ebc-8e80-5e6f910c574a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 14:01:05.207337 containerd[1603]: time="2025-12-16T14:01:05.207244764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 14:01:05.373351 containerd[1603]: time="2025-12-16T14:01:05.373180059Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 14:01:05.374816 containerd[1603]: time="2025-12-16T14:01:05.374726563Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 14:01:05.375035 containerd[1603]: time="2025-12-16T14:01:05.374997564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Dec 16 14:01:05.375160 kubelet[2843]: E1216 14:01:05.375070 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 14:01:05.375160 kubelet[2843]: E1216 14:01:05.375130 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 14:01:05.375351 kubelet[2843]: E1216 14:01:05.375291 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgctf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b5bfd867b-v5ftq_calico-system(4e05f7b2-49a2-4ebc-8e80-5e6f910c574a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 14:01:05.376802 kubelet[2843]: E1216 14:01:05.376716 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5bfd867b-v5ftq" podUID="4e05f7b2-49a2-4ebc-8e80-5e6f910c574a"
Dec 16 14:01:07.031559 containerd[1603]: time="2025-12-16T14:01:07.031495603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 16 14:01:07.244122 containerd[1603]: time="2025-12-16T14:01:07.244041231Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 14:01:07.246057 containerd[1603]: time="2025-12-16T14:01:07.245989970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 14:01:07.246885 containerd[1603]: time="2025-12-16T14:01:07.246038151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Dec 16 14:01:07.246980 kubelet[2843]: E1216 14:01:07.246300 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 14:01:07.246980 kubelet[2843]: E1216 14:01:07.246361 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 14:01:07.246980 kubelet[2843]: E1216 14:01:07.246567 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2cq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d8d578c6b-htjdj_calico-system(1769a332-0974-4355-84f4-605660a8e93f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 16 14:01:07.248472 kubelet[2843]: E1216 14:01:07.248018 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d8d578c6b-htjdj" podUID="1769a332-0974-4355-84f4-605660a8e93f"
Dec 16 14:01:08.032953 containerd[1603]: time="2025-12-16T14:01:08.032474419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 14:01:08.198409 containerd[1603]: time="2025-12-16T14:01:08.198110322Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 14:01:08.201767 containerd[1603]: time="2025-12-16T14:01:08.201641787Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 14:01:08.202104 containerd[1603]: time="2025-12-16T14:01:08.202002293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Dec 16 14:01:08.202343 kubelet[2843]: E1216 14:01:08.202248 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 14:01:08.202343 kubelet[2843]: E1216 14:01:08.202312 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 14:01:08.202579 kubelet[2843]: E1216 14:01:08.202494 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hw9wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66cd68bd7b-ds2n7_calico-apiserver(d41be582-68e3-4041-abac-e335f6c6ba13): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 14:01:08.204273 kubelet[2843]: E1216 14:01:08.204219 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-ds2n7" podUID="d41be582-68e3-4041-abac-e335f6c6ba13"
Dec 16 14:01:08.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.79:22-139.178.68.195:50942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 14:01:08.414868 systemd[1]: Started sshd@15-10.128.0.79:22-139.178.68.195:50942.service - OpenSSH per-connection server daemon (139.178.68.195:50942).
Dec 16 14:01:08.420529 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 14:01:08.420647 kernel: audit: type=1130 audit(1765893668.414:778): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.79:22-139.178.68.195:50942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 14:01:08.715000 audit[4997]: USER_ACCT pid=4997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:08.716290 sshd[4997]: Accepted publickey for core from 139.178.68.195 port 50942 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk
Dec 16 14:01:08.719604 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 14:01:08.734965 systemd-logind[1569]: New session 17 of user core.
Dec 16 14:01:08.752526 kernel: audit: type=1101 audit(1765893668.715:779): pid=4997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:08.752640 kernel: audit: type=1103 audit(1765893668.715:780): pid=4997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:08.715000 audit[4997]: CRED_ACQ pid=4997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:08.715000 audit[4997]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd25a7780 a2=3 a3=0 items=0 ppid=1 pid=4997 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 14:01:08.819849 kernel: audit: type=1006 audit(1765893668.715:781): pid=4997 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1
Dec 16 14:01:08.819941 kernel: audit: type=1300 audit(1765893668.715:781): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd25a7780 a2=3 a3=0 items=0 ppid=1 pid=4997 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 14:01:08.820024 kernel: audit: type=1327 audit(1765893668.715:781): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 14:01:08.715000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 14:01:08.830994 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 14:01:08.835000 audit[4997]: USER_START pid=4997 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:08.839000 audit[5001]: CRED_ACQ pid=5001 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:08.897460 kernel: audit: type=1105 audit(1765893668.835:782): pid=4997 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:08.897555 kernel: audit: type=1103 audit(1765893668.839:783): pid=5001 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:09.025566 sshd[5001]: Connection closed by 139.178.68.195 port 50942
Dec 16 14:01:09.027085 sshd-session[4997]: pam_unix(sshd:session): session closed for user core
Dec 16 14:01:09.028000 audit[4997]: USER_END pid=4997 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:09.035238 systemd[1]: sshd@15-10.128.0.79:22-139.178.68.195:50942.service: Deactivated successfully.
Dec 16 14:01:09.038781 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 14:01:09.043366 systemd-logind[1569]: Session 17 logged out. Waiting for processes to exit.
Dec 16 14:01:09.045374 systemd-logind[1569]: Removed session 17.
Dec 16 14:01:09.065784 kernel: audit: type=1106 audit(1765893669.028:784): pid=4997 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:09.029000 audit[4997]: CRED_DISP pid=4997 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:09.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.79:22-139.178.68.195:50942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 14:01:09.091787 kernel: audit: type=1104 audit(1765893669.029:785): pid=4997 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:10.032786 containerd[1603]: time="2025-12-16T14:01:10.032563240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 16 14:01:10.196066 containerd[1603]: time="2025-12-16T14:01:10.196009615Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 14:01:10.197762 containerd[1603]: time="2025-12-16T14:01:10.197692003Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 16 14:01:10.197943 containerd[1603]: time="2025-12-16T14:01:10.197808363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Dec 16 14:01:10.198091 kubelet[2843]: E1216 14:01:10.198028 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 14:01:10.198553 kubelet[2843]: E1216 14:01:10.198106 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 14:01:10.198911 kubelet[2843]: E1216 14:01:10.198821 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbzd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fdrzj_calico-system(61b00035-2995-44f7-ae62-3ec89692e439): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 14:01:10.200116 kubelet[2843]: E1216 14:01:10.200078 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fdrzj" podUID="61b00035-2995-44f7-ae62-3ec89692e439"
Dec 16 14:01:11.031525 containerd[1603]: time="2025-12-16T14:01:11.031464253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 14:01:11.199525 containerd[1603]: time="2025-12-16T14:01:11.199443993Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 14:01:11.201068 containerd[1603]: time="2025-12-16T14:01:11.201007952Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 14:01:11.201300 containerd[1603]: time="2025-12-16T14:01:11.201037853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Dec 16 14:01:11.201374 kubelet[2843]: E1216 14:01:11.201329 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 14:01:11.201827 kubelet[2843]: E1216 14:01:11.201413 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 14:01:11.201827 kubelet[2843]: E1216 14:01:11.201651 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd72z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7s56z_calico-system(fff9659c-3470-45ea-9613-14efa791d03c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 14:01:11.205284 containerd[1603]: time="2025-12-16T14:01:11.205234355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 14:01:11.364243 containerd[1603]: time="2025-12-16T14:01:11.363507988Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 14:01:11.366068 containerd[1603]: time="2025-12-16T14:01:11.365919900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 14:01:11.366068 containerd[1603]: time="2025-12-16T14:01:11.365969374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Dec 16 14:01:11.366303 kubelet[2843]: E1216 14:01:11.366254 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 14:01:11.366379 kubelet[2843]: E1216 14:01:11.366321 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 14:01:11.366571 kubelet[2843]: E1216 14:01:11.366507 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd72z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7s56z_calico-system(fff9659c-3470-45ea-9613-14efa791d03c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 14:01:11.368080 kubelet[2843]: E1216 14:01:11.368013 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c"
Dec 16 14:01:15.030771 containerd[1603]: time="2025-12-16T14:01:15.030537366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 14:01:15.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.79:22-139.178.68.195:51118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 14:01:15.150897 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 14:01:15.150957 kernel: audit: type=1130 audit(1765893675.142:787): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.79:22-139.178.68.195:51118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 14:01:15.142955 systemd[1]: Started sshd@16-10.128.0.79:22-139.178.68.195:51118.service - OpenSSH per-connection server daemon (139.178.68.195:51118).
Dec 16 14:01:15.213178 containerd[1603]: time="2025-12-16T14:01:15.213077929Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 14:01:15.215122 containerd[1603]: time="2025-12-16T14:01:15.214890832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 14:01:15.215122 containerd[1603]: time="2025-12-16T14:01:15.214942192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Dec 16 14:01:15.215773 kubelet[2843]: E1216 14:01:15.215417 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 14:01:15.215773 kubelet[2843]: E1216 14:01:15.215494 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 14:01:15.216377 kubelet[2843]: E1216 14:01:15.215697 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ftvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66cd68bd7b-v9gp2_calico-apiserver(3ee33909-6767-4f65-befa-f64702fcbe38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 14:01:15.217607 kubelet[2843]: E1216 14:01:15.217509 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-v9gp2" podUID="3ee33909-6767-4f65-befa-f64702fcbe38"
Dec 16 14:01:15.457000 audit[5017]: USER_ACCT pid=5017 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Dec 16 14:01:15.461637 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 14:01:15.463668 sshd[5017]: Accepted publickey for core from 139.178.68.195 port 51118 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk
Dec 16 14:01:15.473197 systemd-logind[1569]: New session 18 of user core.
Dec 16 14:01:15.489860 kernel: audit: type=1101 audit(1765893675.457:788): pid=5017 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:15.489987 kernel: audit: type=1103 audit(1765893675.457:789): pid=5017 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:15.457000 audit[5017]: CRED_ACQ pid=5017 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:15.519807 kernel: audit: type=1006 audit(1765893675.457:790): pid=5017 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 14:01:15.535232 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 14:01:15.457000 audit[5017]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddce36cf0 a2=3 a3=0 items=0 ppid=1 pid=5017 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:15.576938 kernel: audit: type=1300 audit(1765893675.457:790): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddce36cf0 a2=3 a3=0 items=0 ppid=1 pid=5017 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:15.577088 kernel: audit: type=1327 audit(1765893675.457:790): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:01:15.457000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:01:15.545000 audit[5017]: USER_START pid=5017 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:15.616160 kernel: audit: type=1105 audit(1765893675.545:791): pid=5017 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:15.550000 audit[5021]: CRED_ACQ pid=5021 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:15.642787 kernel: audit: type=1103 audit(1765893675.550:792): pid=5021 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 
terminal=ssh res=success' Dec 16 14:01:15.771211 sshd[5021]: Connection closed by 139.178.68.195 port 51118 Dec 16 14:01:15.772083 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Dec 16 14:01:15.773000 audit[5017]: USER_END pid=5017 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:15.780629 systemd[1]: sshd@16-10.128.0.79:22-139.178.68.195:51118.service: Deactivated successfully. Dec 16 14:01:15.786536 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 14:01:15.796904 systemd-logind[1569]: Session 18 logged out. Waiting for processes to exit. Dec 16 14:01:15.799127 systemd-logind[1569]: Removed session 18. Dec 16 14:01:15.811794 kernel: audit: type=1106 audit(1765893675.773:793): pid=5017 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:15.774000 audit[5017]: CRED_DISP pid=5017 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:15.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.79:22-139.178.68.195:51118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:15.837796 kernel: audit: type=1104 audit(1765893675.774:794): pid=5017 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:17.031502 kubelet[2843]: E1216 14:01:17.031420 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5bfd867b-v5ftq" podUID="4e05f7b2-49a2-4ebc-8e80-5e6f910c574a" Dec 16 14:01:20.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.79:22-139.178.68.195:51294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:01:20.824054 systemd[1]: Started sshd@17-10.128.0.79:22-139.178.68.195:51294.service - OpenSSH per-connection server daemon (139.178.68.195:51294). Dec 16 14:01:20.829603 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 14:01:20.829715 kernel: audit: type=1130 audit(1765893680.823:796): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.79:22-139.178.68.195:51294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:21.134000 audit[5033]: USER_ACCT pid=5033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:21.136147 sshd[5033]: Accepted publickey for core from 139.178.68.195 port 51294 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 14:01:21.140527 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:01:21.157059 systemd-logind[1569]: New session 19 of user core. Dec 16 14:01:21.134000 audit[5033]: CRED_ACQ pid=5033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:21.192962 kernel: audit: type=1101 audit(1765893681.134:797): pid=5033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:21.193812 kernel: audit: type=1103 audit(1765893681.134:798): pid=5033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:21.193889 kernel: audit: type=1006 audit(1765893681.134:799): pid=5033 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 14:01:21.195501 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 14:01:21.134000 audit[5033]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd826e76b0 a2=3 a3=0 items=0 ppid=1 pid=5033 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:21.239441 kernel: audit: type=1300 audit(1765893681.134:799): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd826e76b0 a2=3 a3=0 items=0 ppid=1 pid=5033 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:21.241397 kernel: audit: type=1327 audit(1765893681.134:799): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:01:21.134000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:01:21.213000 audit[5033]: USER_START pid=5033 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:21.286525 kernel: audit: type=1105 audit(1765893681.213:800): pid=5033 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:21.286665 kernel: audit: type=1103 audit(1765893681.218:801): pid=5037 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:21.218000 audit[5037]: CRED_ACQ pid=5037 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:21.419637 sshd[5037]: Connection closed by 139.178.68.195 port 51294 Dec 16 14:01:21.421083 sshd-session[5033]: pam_unix(sshd:session): session closed for user core Dec 16 14:01:21.422000 audit[5033]: USER_END pid=5033 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:21.427860 systemd-logind[1569]: Session 19 logged out. Waiting for processes to exit. Dec 16 14:01:21.429075 systemd[1]: sshd@17-10.128.0.79:22-139.178.68.195:51294.service: Deactivated successfully. Dec 16 14:01:21.434130 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 14:01:21.439710 systemd-logind[1569]: Removed session 19. 
Dec 16 14:01:21.422000 audit[5033]: CRED_DISP pid=5033 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:21.485156 kernel: audit: type=1106 audit(1765893681.422:802): pid=5033 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:21.485251 kernel: audit: type=1104 audit(1765893681.422:803): pid=5033 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:21.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.79:22-139.178.68.195:51294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:21.496847 systemd[1]: Started sshd@18-10.128.0.79:22-139.178.68.195:51296.service - OpenSSH per-connection server daemon (139.178.68.195:51296). Dec 16 14:01:21.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.79:22-139.178.68.195:51296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:21.771000 audit[5049]: USER_ACCT pid=5049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:21.772121 sshd[5049]: Accepted publickey for core from 139.178.68.195 port 51296 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 14:01:21.772000 audit[5049]: CRED_ACQ pid=5049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:21.772000 audit[5049]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff08cc6100 a2=3 a3=0 items=0 ppid=1 pid=5049 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:21.772000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:01:21.774544 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:01:21.786782 systemd-logind[1569]: New session 20 of user core. Dec 16 14:01:21.795040 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 14:01:21.800000 audit[5049]: USER_START pid=5049 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:21.803000 audit[5053]: CRED_ACQ pid=5053 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:22.034637 kubelet[2843]: E1216 14:01:22.034485 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d8d578c6b-htjdj" podUID="1769a332-0974-4355-84f4-605660a8e93f" Dec 16 14:01:22.090053 sshd[5053]: Connection closed by 139.178.68.195 port 51296 Dec 16 14:01:22.090929 sshd-session[5049]: pam_unix(sshd:session): session closed for user core Dec 16 14:01:22.092000 audit[5049]: USER_END pid=5049 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:22.092000 audit[5049]: CRED_DISP pid=5049 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:22.096621 systemd[1]: sshd@18-10.128.0.79:22-139.178.68.195:51296.service: Deactivated successfully. Dec 16 14:01:22.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.79:22-139.178.68.195:51296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:22.100198 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 14:01:22.104629 systemd-logind[1569]: Session 20 logged out. Waiting for processes to exit. Dec 16 14:01:22.106158 systemd-logind[1569]: Removed session 20. Dec 16 14:01:22.144616 systemd[1]: Started sshd@19-10.128.0.79:22-139.178.68.195:51300.service - OpenSSH per-connection server daemon (139.178.68.195:51300). Dec 16 14:01:22.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.79:22-139.178.68.195:51300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:01:22.419000 audit[5063]: USER_ACCT pid=5063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:22.420468 sshd[5063]: Accepted publickey for core from 139.178.68.195 port 51300 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 14:01:22.421000 audit[5063]: CRED_ACQ pid=5063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:22.421000 audit[5063]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff4e23ec0 a2=3 a3=0 items=0 ppid=1 pid=5063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:22.421000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:01:22.423177 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:01:22.430889 systemd-logind[1569]: New session 21 of user core. Dec 16 14:01:22.437037 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 14:01:22.442000 audit[5063]: USER_START pid=5063 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:22.444000 audit[5067]: CRED_ACQ pid=5067 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:23.031168 kubelet[2843]: E1216 14:01:23.031101 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-ds2n7" podUID="d41be582-68e3-4041-abac-e335f6c6ba13" Dec 16 14:01:23.681000 audit[5077]: NETFILTER_CFG table=filter:140 family=2 entries=26 op=nft_register_rule pid=5077 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:01:23.681000 audit[5077]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff61afef10 a2=0 a3=7fff61afeefc items=0 ppid=3030 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:23.681000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:01:23.688000 audit[5077]: NETFILTER_CFG table=nat:141 family=2 entries=20 
op=nft_register_rule pid=5077 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:01:23.688000 audit[5077]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff61afef10 a2=0 a3=0 items=0 ppid=3030 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:23.688000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:01:23.697658 sshd[5067]: Connection closed by 139.178.68.195 port 51300 Dec 16 14:01:23.698827 sshd-session[5063]: pam_unix(sshd:session): session closed for user core Dec 16 14:01:23.700000 audit[5063]: USER_END pid=5063 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:23.700000 audit[5063]: CRED_DISP pid=5063 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:23.709343 systemd[1]: sshd@19-10.128.0.79:22-139.178.68.195:51300.service: Deactivated successfully. Dec 16 14:01:23.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.79:22-139.178.68.195:51300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:23.715793 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 14:01:23.720296 systemd-logind[1569]: Session 21 logged out. Waiting for processes to exit. Dec 16 14:01:23.724348 systemd-logind[1569]: Removed session 21. 
Dec 16 14:01:23.740000 audit[5082]: NETFILTER_CFG table=filter:142 family=2 entries=38 op=nft_register_rule pid=5082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:01:23.740000 audit[5082]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd96680db0 a2=0 a3=7ffd96680d9c items=0 ppid=3030 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:23.740000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:01:23.747000 audit[5082]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:01:23.747000 audit[5082]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd96680db0 a2=0 a3=0 items=0 ppid=3030 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:23.747000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:01:23.762030 systemd[1]: Started sshd@20-10.128.0.79:22-139.178.68.195:51310.service - OpenSSH per-connection server daemon (139.178.68.195:51310). Dec 16 14:01:23.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.79:22-139.178.68.195:51310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:24.065000 audit[5084]: USER_ACCT pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:24.066917 sshd[5084]: Accepted publickey for core from 139.178.68.195 port 51310 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 14:01:24.067000 audit[5084]: CRED_ACQ pid=5084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:24.067000 audit[5084]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffede2c2880 a2=3 a3=0 items=0 ppid=1 pid=5084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:24.067000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:01:24.069268 sshd-session[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:01:24.076920 systemd-logind[1569]: New session 22 of user core. Dec 16 14:01:24.085056 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 14:01:24.091000 audit[5084]: USER_START pid=5084 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:24.095000 audit[5088]: CRED_ACQ pid=5088 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:24.478506 sshd[5088]: Connection closed by 139.178.68.195 port 51310 Dec 16 14:01:24.478301 sshd-session[5084]: pam_unix(sshd:session): session closed for user core Dec 16 14:01:24.485000 audit[5084]: USER_END pid=5084 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:24.486000 audit[5084]: CRED_DISP pid=5084 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:24.490926 systemd[1]: sshd@20-10.128.0.79:22-139.178.68.195:51310.service: Deactivated successfully. Dec 16 14:01:24.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.79:22-139.178.68.195:51310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:24.494993 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 14:01:24.497413 systemd-logind[1569]: Session 22 logged out. Waiting for processes to exit. Dec 16 14:01:24.499659 systemd-logind[1569]: Removed session 22. Dec 16 14:01:24.534348 systemd[1]: Started sshd@21-10.128.0.79:22-139.178.68.195:51316.service - OpenSSH per-connection server daemon (139.178.68.195:51316). Dec 16 14:01:24.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.79:22-139.178.68.195:51316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:01:24.807000 audit[5124]: USER_ACCT pid=5124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:24.808372 sshd[5124]: Accepted publickey for core from 139.178.68.195 port 51316 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 14:01:24.808000 audit[5124]: CRED_ACQ pid=5124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:24.808000 audit[5124]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd107c95a0 a2=3 a3=0 items=0 ppid=1 pid=5124 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:24.808000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:01:24.810605 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:01:24.818819 systemd-logind[1569]: New session 23 of user core. Dec 16 14:01:24.824026 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 14:01:24.828000 audit[5124]: USER_START pid=5124 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:24.831000 audit[5128]: CRED_ACQ pid=5128 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:25.017619 sshd[5128]: Connection closed by 139.178.68.195 port 51316 Dec 16 14:01:25.018504 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Dec 16 14:01:25.020000 audit[5124]: USER_END pid=5124 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:25.020000 audit[5124]: CRED_DISP pid=5124 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:25.024937 systemd[1]: sshd@21-10.128.0.79:22-139.178.68.195:51316.service: Deactivated successfully. Dec 16 14:01:25.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.79:22-139.178.68.195:51316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:25.031320 systemd[1]: session-23.scope: Deactivated successfully. 
Dec 16 14:01:25.032106 kubelet[2843]: E1216 14:01:25.032054 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fdrzj" podUID="61b00035-2995-44f7-ae62-3ec89692e439" Dec 16 14:01:25.036535 systemd-logind[1569]: Session 23 logged out. Waiting for processes to exit. Dec 16 14:01:25.040307 systemd-logind[1569]: Removed session 23. Dec 16 14:01:26.032836 kubelet[2843]: E1216 14:01:26.032715 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c" Dec 16 14:01:30.032645 kubelet[2843]: E1216 14:01:30.032582 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-v9gp2" podUID="3ee33909-6767-4f65-befa-f64702fcbe38" Dec 16 14:01:30.081880 systemd[1]: Started sshd@22-10.128.0.79:22-139.178.68.195:51330.service - OpenSSH per-connection server daemon (139.178.68.195:51330). Dec 16 14:01:30.091828 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 14:01:30.092002 kernel: audit: type=1130 audit(1765893690.082:845): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.79:22-139.178.68.195:51330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:30.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.79:22-139.178.68.195:51330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:01:30.293000 audit[5146]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:01:30.293000 audit[5146]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc6d2d63a0 a2=0 a3=7ffc6d2d638c items=0 ppid=3030 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:30.345518 kernel: audit: type=1325 audit(1765893690.293:846): table=filter:144 family=2 entries=26 op=nft_register_rule pid=5146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:01:30.345629 kernel: audit: type=1300 audit(1765893690.293:846): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc6d2d63a0 a2=0 a3=7ffc6d2d638c items=0 ppid=3030 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:30.293000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:01:30.377996 kernel: audit: type=1327 audit(1765893690.293:846): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:01:30.378124 kernel: audit: type=1325 audit(1765893690.354:847): table=nat:145 family=2 entries=104 op=nft_register_chain pid=5146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:01:30.354000 audit[5146]: NETFILTER_CFG table=nat:145 family=2 entries=104 op=nft_register_chain pid=5146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 14:01:30.354000 audit[5146]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc6d2d63a0 a2=0 a3=7ffc6d2d638c items=0 ppid=3030 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:30.411932 kernel: audit: type=1300 audit(1765893690.354:847): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc6d2d63a0 a2=0 a3=7ffc6d2d638c items=0 ppid=3030 pid=5146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:30.412083 kernel: audit: type=1327 audit(1765893690.354:847): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:01:30.354000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 14:01:30.450000 audit[5142]: USER_ACCT pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:30.452525 sshd[5142]: Accepted publickey for core from 139.178.68.195 port 51330 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 14:01:30.455545 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) 
by core(uid=0) Dec 16 14:01:30.472786 systemd-logind[1569]: New session 24 of user core. Dec 16 14:01:30.482871 kernel: audit: type=1101 audit(1765893690.450:848): pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:30.453000 audit[5142]: CRED_ACQ pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:30.484118 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 14:01:30.526410 kernel: audit: type=1103 audit(1765893690.453:849): pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:30.526669 kernel: audit: type=1006 audit(1765893690.453:850): pid=5142 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 14:01:30.453000 audit[5142]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6beaac80 a2=3 a3=0 items=0 ppid=1 pid=5142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:30.453000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:01:30.506000 audit[5142]: USER_START pid=5142 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:30.513000 audit[5148]: CRED_ACQ pid=5148 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:30.721364 sshd[5148]: Connection closed by 139.178.68.195 port 51330 Dec 16 14:01:30.722095 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Dec 16 14:01:30.724000 audit[5142]: USER_END pid=5142 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:30.724000 audit[5142]: CRED_DISP pid=5142 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:30.730314 systemd-logind[1569]: Session 24 logged out. Waiting for processes to exit. Dec 16 14:01:30.730950 systemd[1]: sshd@22-10.128.0.79:22-139.178.68.195:51330.service: Deactivated successfully. 
Dec 16 14:01:30.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.79:22-139.178.68.195:51330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:30.735118 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 14:01:30.738376 systemd-logind[1569]: Removed session 24. Dec 16 14:01:31.030923 kubelet[2843]: E1216 14:01:31.030667 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5bfd867b-v5ftq" podUID="4e05f7b2-49a2-4ebc-8e80-5e6f910c574a" Dec 16 14:01:35.031468 kubelet[2843]: E1216 14:01:35.031374 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d8d578c6b-htjdj" podUID="1769a332-0974-4355-84f4-605660a8e93f" Dec 16 14:01:35.033809 kubelet[2843]: E1216 14:01:35.032949 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-ds2n7" podUID="d41be582-68e3-4041-abac-e335f6c6ba13" Dec 16 14:01:35.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.79:22-139.178.68.195:58560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:35.774519 systemd[1]: Started sshd@23-10.128.0.79:22-139.178.68.195:58560.service - OpenSSH per-connection server daemon (139.178.68.195:58560). Dec 16 14:01:35.780815 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 14:01:35.780932 kernel: audit: type=1130 audit(1765893695.774:856): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.79:22-139.178.68.195:58560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 14:01:36.071000 audit[5160]: USER_ACCT pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:36.083288 systemd-logind[1569]: New session 25 of user core. Dec 16 14:01:36.074926 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:01:36.089723 sshd[5160]: Accepted publickey for core from 139.178.68.195 port 58560 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 14:01:36.104785 kernel: audit: type=1101 audit(1765893696.071:857): pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:36.104912 kernel: audit: type=1103 audit(1765893696.072:858): pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:36.072000 audit[5160]: CRED_ACQ pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:36.129850 kernel: audit: type=1006 audit(1765893696.072:859): pid=5160 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 14:01:36.146020 kernel: audit: type=1300 audit(1765893696.072:859): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff32e50a00 a2=3 a3=0 items=0 ppid=1 pid=5160 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:36.072000 audit[5160]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff32e50a00 a2=3 a3=0 items=0 ppid=1 pid=5160 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:36.182251 kernel: audit: type=1327 audit(1765893696.072:859): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:01:36.072000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:01:36.176676 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 14:01:36.184000 audit[5160]: USER_START pid=5160 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:36.221855 kernel: audit: type=1105 audit(1765893696.184:860): pid=5160 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:36.224379 kernel: audit: type=1103 audit(1765893696.187:861): pid=5164 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:36.187000 audit[5164]: CRED_ACQ pid=5164 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:36.366845 sshd[5164]: Connection closed by 139.178.68.195 port 58560 Dec 16 14:01:36.368116 sshd-session[5160]: pam_unix(sshd:session): session closed for user core Dec 16 14:01:36.370000 audit[5160]: USER_END pid=5160 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:36.386686 systemd[1]: sshd@23-10.128.0.79:22-139.178.68.195:58560.service: Deactivated successfully. Dec 16 14:01:36.392223 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 14:01:36.396012 systemd-logind[1569]: Session 25 logged out. Waiting for processes to exit. Dec 16 14:01:36.400223 systemd-logind[1569]: Removed session 25. 
Dec 16 14:01:36.370000 audit[5160]: CRED_DISP pid=5160 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:36.432767 kernel: audit: type=1106 audit(1765893696.370:862): pid=5160 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:36.432883 kernel: audit: type=1104 audit(1765893696.370:863): pid=5160 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:36.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.79:22-139.178.68.195:58560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:38.036774 kubelet[2843]: E1216 14:01:38.035864 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s56z" podUID="fff9659c-3470-45ea-9613-14efa791d03c" Dec 16 14:01:40.039203 kubelet[2843]: E1216 14:01:40.039145 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fdrzj" podUID="61b00035-2995-44f7-ae62-3ec89692e439" Dec 16 14:01:41.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.79:22-139.178.68.195:48316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:41.431189 systemd[1]: Started sshd@24-10.128.0.79:22-139.178.68.195:48316.service - OpenSSH per-connection server daemon (139.178.68.195:48316). Dec 16 14:01:41.446162 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 14:01:41.446283 kernel: audit: type=1130 audit(1765893701.430:865): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.79:22-139.178.68.195:48316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 14:01:41.755673 sshd[5178]: Accepted publickey for core from 139.178.68.195 port 48316 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 14:01:41.754000 audit[5178]: USER_ACCT pid=5178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:41.787798 kernel: audit: type=1101 audit(1765893701.754:866): pid=5178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:41.790098 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:01:41.786000 audit[5178]: CRED_ACQ pid=5178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:41.825780 kernel: audit: type=1103 audit(1765893701.786:867): pid=5178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:41.786000 audit[5178]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe432b82b0 a2=3 a3=0 items=0 ppid=1 pid=5178 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:41.872264 kernel: audit: type=1006 audit(1765893701.786:868): pid=5178 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 14:01:41.872387 kernel: audit: type=1300 audit(1765893701.786:868): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe432b82b0 a2=3 a3=0 items=0 ppid=1 pid=5178 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:41.786000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:01:41.885774 kernel: audit: type=1327 audit(1765893701.786:868): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:01:41.887869 systemd-logind[1569]: New session 26 of user core. Dec 16 14:01:41.891003 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 14:01:41.895000 audit[5178]: USER_START pid=5178 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:41.933764 kernel: audit: type=1105 audit(1765893701.895:869): pid=5178 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:41.933891 kernel: audit: type=1103 audit(1765893701.932:870): pid=5182 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:41.932000 audit[5182]: CRED_ACQ pid=5182 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:42.153777 sshd[5182]: Connection closed by 139.178.68.195 port 48316 Dec 16 14:01:42.154071 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Dec 16 14:01:42.158000 audit[5178]: USER_END pid=5178 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:42.166040 systemd[1]: sshd@24-10.128.0.79:22-139.178.68.195:48316.service: Deactivated successfully. Dec 16 14:01:42.170199 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 14:01:42.172591 systemd-logind[1569]: Session 26 logged out. Waiting for processes to exit. Dec 16 14:01:42.175207 systemd-logind[1569]: Removed session 26. 
Dec 16 14:01:42.196894 kernel: audit: type=1106 audit(1765893702.158:871): pid=5178 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:42.159000 audit[5178]: CRED_DISP pid=5178 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:42.223775 kernel: audit: type=1104 audit(1765893702.159:872): pid=5178 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:42.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.79:22-139.178.68.195:48316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:43.032852 kubelet[2843]: E1216 14:01:43.032795 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-v9gp2" podUID="3ee33909-6767-4f65-befa-f64702fcbe38" Dec 16 14:01:46.036772 containerd[1603]: time="2025-12-16T14:01:46.036077946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 14:01:46.216410 containerd[1603]: time="2025-12-16T14:01:46.216101570Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:01:46.217822 containerd[1603]: time="2025-12-16T14:01:46.217695911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 14:01:46.218065 containerd[1603]: time="2025-12-16T14:01:46.217721055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 14:01:46.218599 kubelet[2843]: E1216 14:01:46.218285 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 14:01:46.218599 kubelet[2843]: E1216 14:01:46.218359 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 14:01:46.218599 kubelet[2843]: E1216 14:01:46.218524 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a4506af38c3640db88d3886b67f3e843,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wgctf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b5bfd867b-v5ftq_calico-system(4e05f7b2-49a2-4ebc-8e80-5e6f910c574a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 14:01:46.221284 containerd[1603]: time="2025-12-16T14:01:46.221160975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 14:01:46.383804 containerd[1603]: time="2025-12-16T14:01:46.382692637Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 14:01:46.384765 containerd[1603]: time="2025-12-16T14:01:46.384200995Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 14:01:46.384765 containerd[1603]: time="2025-12-16T14:01:46.384322314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 14:01:46.384970 kubelet[2843]: E1216 14:01:46.384566 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 14:01:46.384970 kubelet[2843]: E1216 14:01:46.384626 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 14:01:46.385715 kubelet[2843]: E1216 14:01:46.385647 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgctf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b5bfd867b-v5ftq_calico-system(4e05f7b2-49a2-4ebc-8e80-5e6f910c574a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 14:01:46.386903 kubelet[2843]: E1216 14:01:46.386851 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b5bfd867b-v5ftq" podUID="4e05f7b2-49a2-4ebc-8e80-5e6f910c574a" Dec 16 14:01:47.030858 kubelet[2843]: E1216 14:01:47.030647 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66cd68bd7b-ds2n7" podUID="d41be582-68e3-4041-abac-e335f6c6ba13" Dec 16 14:01:47.030858 kubelet[2843]: E1216 14:01:47.030719 2843 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d8d578c6b-htjdj" podUID="1769a332-0974-4355-84f4-605660a8e93f" Dec 16 14:01:47.206579 systemd[1]: Started sshd@25-10.128.0.79:22-139.178.68.195:48320.service - OpenSSH per-connection server daemon (139.178.68.195:48320). Dec 16 14:01:47.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.128.0.79:22-139.178.68.195:48320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:47.213177 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 14:01:47.213305 kernel: audit: type=1130 audit(1765893707.206:874): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.128.0.79:22-139.178.68.195:48320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:47.532000 audit[5194]: USER_ACCT pid=5194 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:47.537881 sshd-session[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 14:01:47.550331 sshd[5194]: Accepted publickey for core from 139.178.68.195 port 48320 ssh2: RSA SHA256:gyx9J5/0UWaEJCjt8mh0YGJi0BpO1TeaKdfoKLc64fk Dec 16 14:01:47.563787 kernel: audit: type=1101 audit(1765893707.532:875): pid=5194 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:47.534000 audit[5194]: CRED_ACQ pid=5194 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:47.574041 systemd-logind[1569]: New session 27 of user core. 
Dec 16 14:01:47.600346 kernel: audit: type=1103 audit(1765893707.534:876): pid=5194 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:47.646716 kernel: audit: type=1006 audit(1765893707.534:877): pid=5194 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 16 14:01:47.646895 kernel: audit: type=1300 audit(1765893707.534:877): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff884b4f70 a2=3 a3=0 items=0 ppid=1 pid=5194 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:47.534000 audit[5194]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff884b4f70 a2=3 a3=0 items=0 ppid=1 pid=5194 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 14:01:47.534000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:01:47.649064 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 16 14:01:47.658801 kernel: audit: type=1327 audit(1765893707.534:877): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 14:01:47.662000 audit[5194]: USER_START pid=5194 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:47.700111 kernel: audit: type=1105 audit(1765893707.662:878): pid=5194 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:47.700000 audit[5204]: CRED_ACQ pid=5204 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:47.727805 kernel: audit: type=1103 audit(1765893707.700:879): pid=5204 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:47.923638 sshd[5204]: Connection closed by 139.178.68.195 port 48320 Dec 16 14:01:47.924612 sshd-session[5194]: pam_unix(sshd:session): session closed for user core Dec 16 14:01:47.928000 audit[5194]: USER_END pid=5194 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:47.935223 systemd-logind[1569]: Session 27 
logged out. Waiting for processes to exit. Dec 16 14:01:47.937205 systemd[1]: sshd@25-10.128.0.79:22-139.178.68.195:48320.service: Deactivated successfully. Dec 16 14:01:47.942350 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 14:01:47.946976 systemd-logind[1569]: Removed session 27. Dec 16 14:01:47.966986 kernel: audit: type=1106 audit(1765893707.928:880): pid=5194 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:47.928000 audit[5194]: CRED_DISP pid=5194 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Dec 16 14:01:47.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.128.0.79:22-139.178.68.195:48320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 14:01:47.999990 kernel: audit: type=1104 audit(1765893707.928:881): pid=5194 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'