Dec 16 03:25:14.999419 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:18:19 -00 2025 Dec 16 03:25:14.999465 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 03:25:14.999490 kernel: BIOS-provided physical RAM map: Dec 16 03:25:14.999551 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Dec 16 03:25:14.999568 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Dec 16 03:25:14.999584 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Dec 16 03:25:14.999604 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Dec 16 03:25:14.999622 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Dec 16 03:25:14.999640 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd318fff] usable Dec 16 03:25:14.999678 kernel: BIOS-e820: [mem 0x00000000bd319000-0x00000000bd322fff] ACPI data Dec 16 03:25:14.999696 kernel: BIOS-e820: [mem 0x00000000bd323000-0x00000000bf8ecfff] usable Dec 16 03:25:14.999714 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Dec 16 03:25:14.999731 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Dec 16 03:25:14.999748 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Dec 16 03:25:14.999805 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Dec 16 03:25:14.999824 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Dec 16 03:25:14.999843 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Dec 16 03:25:14.999862 kernel: NX (Execute Disable) protection: active Dec 16 03:25:14.999902 kernel: APIC: Static calls initialized Dec 16 03:25:14.999939 kernel: efi: EFI v2.7 by EDK II Dec 16 03:25:14.999957 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd323018 RNG=0xbfb73018 TPMEventLog=0xbd319018 Dec 16 03:25:14.999976 kernel: random: crng init done Dec 16 03:25:14.999996 kernel: secureboot: Secure boot disabled Dec 16 03:25:15.000014 kernel: SMBIOS 2.4 present. 
Dec 16 03:25:15.000039 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025 Dec 16 03:25:15.000059 kernel: DMI: Memory slots populated: 1/1 Dec 16 03:25:15.000078 kernel: Hypervisor detected: KVM Dec 16 03:25:15.000096 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 16 03:25:15.000114 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 16 03:25:15.000134 kernel: kvm-clock: using sched offset of 11375646138 cycles Dec 16 03:25:15.000154 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 03:25:15.000175 kernel: tsc: Detected 2299.998 MHz processor Dec 16 03:25:15.000218 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 03:25:15.000244 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 03:25:15.000263 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Dec 16 03:25:15.000400 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Dec 16 03:25:15.000422 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 03:25:15.000440 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 16 03:25:15.000458 kernel: Using GB pages for direct mapping Dec 16 03:25:15.000477 kernel: ACPI: Early table checksum verification disabled Dec 16 03:25:15.000511 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Dec 16 03:25:15.000531 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Dec 16 03:25:15.000551 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Dec 16 03:25:15.000571 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Dec 16 03:25:15.000589 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Dec 16 03:25:15.000612 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Dec 16 03:25:15.000631 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Dec 16 03:25:15.000651 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Dec 16 03:25:15.000670 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Dec 16 03:25:15.000690 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Dec 16 03:25:15.000709 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Dec 16 03:25:15.000731 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Dec 16 03:25:15.000751 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Dec 16 03:25:15.000770 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Dec 16 03:25:15.000789 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Dec 16 03:25:15.000808 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Dec 16 03:25:15.000828 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Dec 16 03:25:15.000847 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Dec 16 03:25:15.000866 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Dec 16 03:25:15.000888 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Dec 16 03:25:15.000906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 16 03:25:15.000925 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Dec 16 03:25:15.000943 kernel: 
ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Dec 16 03:25:15.000962 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Dec 16 03:25:15.000980 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Dec 16 03:25:15.001000 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff] Dec 16 03:25:15.001023 kernel: Zone ranges: Dec 16 03:25:15.001041 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 03:25:15.001060 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 16 03:25:15.001079 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 16 03:25:15.001098 kernel: Device empty Dec 16 03:25:15.001118 kernel: Movable zone start for each node Dec 16 03:25:15.001137 kernel: Early memory node ranges Dec 16 03:25:15.001161 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 16 03:25:15.001180 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 16 03:25:15.001220 kernel: node 0: [mem 0x0000000000100000-0x00000000bd318fff] Dec 16 03:25:15.001239 kernel: node 0: [mem 0x00000000bd323000-0x00000000bf8ecfff] Dec 16 03:25:15.001267 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 16 03:25:15.001296 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 16 03:25:15.001316 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 16 03:25:15.001336 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 03:25:15.001361 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 16 03:25:15.001381 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 16 03:25:15.001401 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Dec 16 03:25:15.001428 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 16 03:25:15.001447 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Dec 16 03:25:15.001466 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 16 03:25:15.001486 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 16 03:25:15.001510 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 03:25:15.001529 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 16 03:25:15.001549 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 03:25:15.001569 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 16 03:25:15.001588 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 16 03:25:15.001608 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 03:25:15.001628 kernel: CPU topo: Max. logical packages: 1 Dec 16 03:25:15.001651 kernel: CPU topo: Max. logical dies: 1 Dec 16 03:25:15.001670 kernel: CPU topo: Max. dies per package: 1 Dec 16 03:25:15.001689 kernel: CPU topo: Max. threads per core: 2 Dec 16 03:25:15.001709 kernel: CPU topo: Num. cores per package: 1 Dec 16 03:25:15.001729 kernel: CPU topo: Num. 
threads per package: 2 Dec 16 03:25:15.001746 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 16 03:25:15.001766 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 16 03:25:15.001785 kernel: Booting paravirtualized kernel on KVM Dec 16 03:25:15.001809 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 03:25:15.001830 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 16 03:25:15.001849 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 16 03:25:15.001869 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 16 03:25:15.001888 kernel: pcpu-alloc: [0] 0 1 Dec 16 03:25:15.001907 kernel: kvm-guest: PV spinlocks enabled Dec 16 03:25:15.001927 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 03:25:15.001953 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 03:25:15.001973 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 16 03:25:15.001993 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 03:25:15.002013 kernel: Fallback order for Node 0: 0 Dec 16 03:25:15.002032 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965136 Dec 16 03:25:15.002052 kernel: Policy zone: Normal Dec 16 03:25:15.002072 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 03:25:15.002095 kernel: software IO TLB: area num 2. Dec 16 03:25:15.002129 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 03:25:15.002153 kernel: Kernel/User page tables isolation: enabled Dec 16 03:25:15.002174 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 03:25:15.002221 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 03:25:15.002240 kernel: Dynamic Preempt: voluntary Dec 16 03:25:15.002260 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 03:25:15.002295 kernel: rcu: RCU event tracing is enabled. Dec 16 03:25:15.002316 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 03:25:15.002341 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 03:25:15.002362 kernel: Rude variant of Tasks RCU enabled. Dec 16 03:25:15.002382 kernel: Tracing variant of Tasks RCU enabled. Dec 16 03:25:15.002403 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 03:25:15.002431 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 03:25:15.002451 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 03:25:15.002472 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 03:25:15.002493 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 03:25:15.002513 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 16 03:25:15.002534 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Dec 16 03:25:15.002554 kernel: Console: colour dummy device 80x25 Dec 16 03:25:15.002578 kernel: printk: legacy console [ttyS0] enabled Dec 16 03:25:15.002597 kernel: ACPI: Core revision 20240827 Dec 16 03:25:15.002618 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 03:25:15.002638 kernel: x2apic enabled Dec 16 03:25:15.002658 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 03:25:15.002679 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 16 03:25:15.002699 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 16 03:25:15.002723 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Dec 16 03:25:15.002743 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 16 03:25:15.002764 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 16 03:25:15.002784 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 03:25:15.002804 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Dec 16 03:25:15.002824 kernel: Spectre V2 : Mitigation: IBRS Dec 16 03:25:15.002845 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 03:25:15.002868 kernel: RETBleed: Mitigation: IBRS Dec 16 03:25:15.002888 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 16 03:25:15.002908 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Dec 16 03:25:15.002929 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 16 03:25:15.002949 kernel: MDS: Mitigation: Clear CPU buffers Dec 16 03:25:15.002970 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 16 03:25:15.002990 kernel: active return thunk: its_return_thunk Dec 16 03:25:15.003014 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 16 03:25:15.003033 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 03:25:15.003053 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 03:25:15.003074 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 03:25:15.003094 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 03:25:15.003115 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 16 03:25:15.003135 kernel: Freeing SMP alternatives memory: 32K Dec 16 03:25:15.003155 kernel: pid_max: default: 32768 minimum: 301 Dec 16 03:25:15.003179 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 03:25:15.003213 kernel: landlock: Up and running. Dec 16 03:25:15.003233 kernel: SELinux: Initializing. Dec 16 03:25:15.003254 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 03:25:15.003274 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 03:25:15.003301 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 16 03:25:15.003322 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 16 03:25:15.003346 kernel: signal: max sigframe size: 1776 Dec 16 03:25:15.003366 kernel: rcu: Hierarchical SRCU implementation. Dec 16 03:25:15.003386 kernel: rcu: Max phase no-delay instances is 400. 
Dec 16 03:25:15.003406 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 03:25:15.003427 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 16 03:25:15.003447 kernel: smp: Bringing up secondary CPUs ... Dec 16 03:25:15.003467 kernel: smpboot: x86: Booting SMP configuration: Dec 16 03:25:15.003490 kernel: .... node #0, CPUs: #1 Dec 16 03:25:15.003511 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 16 03:25:15.003533 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 16 03:25:15.003553 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 03:25:15.003573 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 16 03:25:15.003594 kernel: Memory: 7580388K/7860544K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15556K init, 2484K bss, 274324K reserved, 0K cma-reserved) Dec 16 03:25:15.003618 kernel: devtmpfs: initialized Dec 16 03:25:15.003638 kernel: x86/mm: Memory block size: 128MB Dec 16 03:25:15.003658 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 16 03:25:15.003678 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 03:25:15.003699 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 03:25:15.003719 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 03:25:15.003739 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 03:25:15.003762 kernel: audit: initializing netlink subsys (disabled) Dec 16 03:25:15.003783 kernel: audit: type=2000 audit(1765855511.493:1): state=initialized audit_enabled=0 res=1 Dec 16 03:25:15.003802 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 03:25:15.003822 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 03:25:15.003842 kernel: cpuidle: using governor menu Dec 16 03:25:15.003862 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 03:25:15.003882 kernel: dca service started, version 1.12.1 Dec 16 03:25:15.003902 kernel: PCI: Using configuration type 1 for base access Dec 16 03:25:15.003926 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 16 03:25:15.003946 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 03:25:15.003966 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 03:25:15.003985 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 03:25:15.004005 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 03:25:15.004025 kernel: ACPI: Added _OSI(Module Device) Dec 16 03:25:15.004045 kernel: ACPI: Added _OSI(Processor Device) Dec 16 03:25:15.004068 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 03:25:15.004088 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 16 03:25:15.004109 kernel: ACPI: Interpreter enabled Dec 16 03:25:15.004129 kernel: ACPI: PM: (supports S0 S3 S5) Dec 16 03:25:15.004149 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 03:25:15.004169 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 03:25:15.004203 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 16 03:25:15.004226 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 16 03:25:15.004268 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 03:25:15.004667 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 16 03:25:15.004955 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 16 03:25:15.005251 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 16 03:25:15.005291 kernel: PCI host bridge to bus 0000:00 Dec 16 03:25:15.005552 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 03:25:15.005797 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 16 03:25:15.006066 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 03:25:15.006334 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 16 03:25:15.006572 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 03:25:15.006857 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Dec 16 03:25:15.007130 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Dec 16 03:25:15.007540 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Dec 16 03:25:15.007819 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 16 03:25:15.008097 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Dec 16 03:25:15.008399 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Dec 16 03:25:15.008667 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Dec 16 03:25:15.008938 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 16 03:25:15.009219 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Dec 16 03:25:15.009700 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Dec 16 03:25:15.010062 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 16 03:25:15.010448 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Dec 16 03:25:15.010784 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Dec 16 03:25:15.010812 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 16 03:25:15.010836 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 16 03:25:15.010858 kernel: ACPI: PCI: 
Interrupt link LNKC configured for IRQ 11 Dec 16 03:25:15.010881 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 16 03:25:15.010922 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 16 03:25:15.010944 kernel: iommu: Default domain type: Translated Dec 16 03:25:15.010967 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 03:25:15.010990 kernel: efivars: Registered efivars operations Dec 16 03:25:15.011013 kernel: PCI: Using ACPI for IRQ routing Dec 16 03:25:15.011035 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 03:25:15.011057 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 16 03:25:15.011084 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 16 03:25:15.011106 kernel: e820: reserve RAM buffer [mem 0xbd319000-0xbfffffff] Dec 16 03:25:15.011128 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 16 03:25:15.011150 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 16 03:25:15.011172 kernel: vgaarb: loaded Dec 16 03:25:15.011211 kernel: clocksource: Switched to clocksource kvm-clock Dec 16 03:25:15.011230 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 03:25:15.011250 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 03:25:15.011283 kernel: pnp: PnP ACPI init Dec 16 03:25:15.011304 kernel: pnp: PnP ACPI: found 7 devices Dec 16 03:25:15.011325 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 03:25:15.011345 kernel: NET: Registered PF_INET protocol family Dec 16 03:25:15.011365 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 03:25:15.011385 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 16 03:25:15.011406 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 03:25:15.011431 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 03:25:15.011453 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 16 03:25:15.011475 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 16 03:25:15.011497 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 16 03:25:15.011519 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 16 03:25:15.011541 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 03:25:15.011564 kernel: NET: Registered PF_XDP protocol family Dec 16 03:25:15.011896 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 03:25:15.012264 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 03:25:15.012593 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 03:25:15.012897 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 16 03:25:15.013246 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 16 03:25:15.013288 kernel: PCI: CLS 0 bytes, default 64 Dec 16 03:25:15.013310 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 16 03:25:15.013332 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 16 03:25:15.013353 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 16 03:25:15.013375 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 16 03:25:15.013395 kernel: clocksource: Switched to clocksource tsc Dec 16 03:25:15.013417 
kernel: Initialise system trusted keyrings Dec 16 03:25:15.013443 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 16 03:25:15.013464 kernel: Key type asymmetric registered Dec 16 03:25:15.013485 kernel: Asymmetric key parser 'x509' registered Dec 16 03:25:15.013506 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 03:25:15.013527 kernel: io scheduler mq-deadline registered Dec 16 03:25:15.013548 kernel: io scheduler kyber registered Dec 16 03:25:15.013568 kernel: io scheduler bfq registered Dec 16 03:25:15.013594 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 03:25:15.013616 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 16 03:25:15.013952 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 16 03:25:15.013982 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Dec 16 03:25:15.014345 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 16 03:25:15.014374 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 16 03:25:15.014708 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 16 03:25:15.014742 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 03:25:15.014766 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 03:25:15.014788 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 16 03:25:15.014811 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 16 03:25:15.014833 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 16 03:25:15.015324 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 16 03:25:15.015354 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 03:25:15.015382 kernel: i8042: Warning: Keylock active Dec 16 03:25:15.015401 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 03:25:15.015420 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 03:25:15.017637 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 16 03:25:15.017996 kernel: rtc_cmos 00:00: registered as rtc0 Dec 16 03:25:15.018339 kernel: rtc_cmos 00:00: setting system clock to 2025-12-16T03:25:13 UTC (1765855513) Dec 16 03:25:15.018663 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 16 03:25:15.018689 kernel: intel_pstate: CPU model not supported Dec 16 03:25:15.018711 kernel: pstore: Using crash dump compression: deflate Dec 16 03:25:15.018732 kernel: pstore: Registered efi_pstore as persistent store backend Dec 16 03:25:15.018753 kernel: NET: Registered PF_INET6 protocol family Dec 16 03:25:15.018773 kernel: Segment Routing with IPv6 Dec 16 03:25:15.018795 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 03:25:15.018822 kernel: NET: Registered PF_PACKET protocol family Dec 16 03:25:15.018843 kernel: Key type dns_resolver registered Dec 16 03:25:15.018863 kernel: IPI shorthand broadcast: enabled Dec 16 03:25:15.018884 kernel: sched_clock: Marking stable (1880004655, 133539110)->(2026479716, -12935951) Dec 16 03:25:15.018905 kernel: registered taskstats version 1 Dec 16 03:25:15.018926 kernel: Loading compiled-in X.509 certificates Dec 16 03:25:15.018946 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: aafd1eb27ea805b8231c3bede9210239fae84df8' Dec 16 03:25:15.018973 kernel: Demotion targets for Node 0: null Dec 16 03:25:15.018994 kernel: Key type .fscrypt registered Dec 16 03:25:15.019015 kernel: Key type fscrypt-provisioning registered Dec 16 
03:25:15.019035 kernel: ima: Allocated hash algorithm: sha1 Dec 16 03:25:15.019056 kernel: ima: Can not allocate sha384 (reason: -2) Dec 16 03:25:15.019078 kernel: ima: No architecture policies found Dec 16 03:25:15.019098 kernel: clk: Disabling unused clocks Dec 16 03:25:15.019124 kernel: Freeing unused kernel image (initmem) memory: 15556K Dec 16 03:25:15.019145 kernel: Write protecting the kernel read-only data: 47104k Dec 16 03:25:15.019166 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Dec 16 03:25:15.019338 kernel: Run /init as init process Dec 16 03:25:15.019361 kernel: with arguments: Dec 16 03:25:15.019382 kernel: /init Dec 16 03:25:15.019526 kernel: with environment: Dec 16 03:25:15.019546 kernel: HOME=/ Dec 16 03:25:15.019575 kernel: TERM=linux Dec 16 03:25:15.019596 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 03:25:15.019726 kernel: SCSI subsystem initialized Dec 16 03:25:15.020101 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues Dec 16 03:25:15.022560 kernel: scsi host0: Virtio SCSI HBA Dec 16 03:25:15.022957 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Dec 16 03:25:15.023350 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Dec 16 03:25:15.023717 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Dec 16 03:25:15.024101 kernel: sd 0:0:1:0: [sda] Write Protect is off Dec 16 03:25:15.024514 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Dec 16 03:25:15.024878 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 16 03:25:15.024918 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 03:25:15.024966 kernel: GPT:25804799 != 33554431 Dec 16 03:25:15.024993 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 03:25:15.025015 kernel: GPT:25804799 != 33554431 Dec 16 03:25:15.025036 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 03:25:15.025058 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 03:25:15.027503 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Dec 16 03:25:15.027541 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 03:25:15.027565 kernel: device-mapper: uevent: version 1.0.3 Dec 16 03:25:15.027588 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 03:25:15.027611 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 16 03:25:15.027631 kernel: raid6: avx2x4 gen() 18183 MB/s Dec 16 03:25:15.027654 kernel: raid6: avx2x2 gen() 18142 MB/s Dec 16 03:25:15.027685 kernel: raid6: avx2x1 gen() 14006 MB/s Dec 16 03:25:15.027707 kernel: raid6: using algorithm avx2x4 gen() 18183 MB/s Dec 16 03:25:15.027729 kernel: raid6: .... 
xor() 7946 MB/s, rmw enabled Dec 16 03:25:15.027752 kernel: raid6: using avx2x2 recovery algorithm Dec 16 03:25:15.027775 kernel: xor: automatically using best checksumming function avx Dec 16 03:25:15.027797 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 03:25:15.027820 kernel: BTRFS: device fsid 57a8262f-2900-48ba-a17e-aafbd70d59c7 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (155) Dec 16 03:25:15.027844 kernel: BTRFS info (device dm-0): first mount of filesystem 57a8262f-2900-48ba-a17e-aafbd70d59c7 Dec 16 03:25:15.027872 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:25:15.027895 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 03:25:15.027918 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 03:25:15.027940 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 03:25:15.027962 kernel: loop: module loaded Dec 16 03:25:15.027984 kernel: loop0: detected capacity change from 0 to 100528 Dec 16 03:25:15.028007 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 03:25:15.028045 systemd[1]: Successfully made /usr/ read-only. Dec 16 03:25:15.028074 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 03:25:15.028100 systemd[1]: Detected virtualization google. Dec 16 03:25:15.028128 systemd[1]: Detected architecture x86-64. Dec 16 03:25:15.028150 systemd[1]: Running in initrd. Dec 16 03:25:15.028178 systemd[1]: No hostname configured, using default hostname. Dec 16 03:25:15.028268 systemd[1]: Hostname set to . Dec 16 03:25:15.028292 systemd[1]: Initializing machine ID from random generator. Dec 16 03:25:15.028326 systemd[1]: Queued start job for default target initrd.target. Dec 16 03:25:15.028349 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 03:25:15.028372 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:25:15.028396 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:25:15.028427 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 03:25:15.028452 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:25:15.028477 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 03:25:15.028502 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 03:25:15.028526 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:25:15.028555 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:25:15.028579 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 03:25:15.028604 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:25:15.028628 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:25:15.028657 systemd[1]: Reached target swap.target - Swaps. Dec 16 03:25:15.028686 systemd[1]: Reached target timers.target - Timer Units. 
Dec 16 03:25:15.028710 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 03:25:15.028735 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 03:25:15.028759 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:25:15.028783 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 03:25:15.028807 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 03:25:15.028836 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:25:15.028861 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:25:15.028885 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:25:15.028909 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:25:15.028934 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 03:25:15.028958 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 03:25:15.028983 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 03:25:15.029012 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 03:25:15.029045 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 03:25:15.029070 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 03:25:15.029094 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:25:15.029118 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 03:25:15.029149 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:25:15.029173 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 03:25:15.029288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:25:15.033089 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 03:25:15.033120 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 03:25:15.033223 systemd-journald[292]: Collecting audit messages is enabled. Dec 16 03:25:15.033281 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:25:15.033306 kernel: audit: type=1130 audit(1765855515.014:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.033336 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 03:25:15.033361 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:25:15.033385 systemd-journald[292]: Journal started Dec 16 03:25:15.033426 systemd-journald[292]: Runtime Journal (/run/log/journal/59cdb7b2cdc646b0a0b9ec9897f3c017) is 8M, max 148.4M, 140.4M free. Dec 16 03:25:15.036686 kernel: Bridge firewalling registered Dec 16 03:25:15.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 03:25:15.034654 systemd-modules-load[293]: Inserted module 'br_netfilter' Dec 16 03:25:15.039454 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:25:15.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.045246 kernel: audit: type=1130 audit(1765855515.039:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.045294 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 03:25:15.048242 kernel: audit: type=1130 audit(1765855515.044:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.053439 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 03:25:15.056071 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 03:25:15.063406 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:25:15.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.067234 kernel: audit: type=1130 audit(1765855515.062:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.086509 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:25:15.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.097416 kernel: audit: type=1130 audit(1765855515.091:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.100415 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 03:25:15.102384 systemd-tmpfiles[310]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 03:25:15.108357 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:25:15.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.122216 kernel: audit: type=1130 audit(1765855515.116:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:15.122277 kernel: audit: type=1130 audit(1765855515.119:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.120438 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:25:15.130320 kernel: audit: type=1334 audit(1765855515.119:9): prog-id=6 op=LOAD Dec 16 03:25:15.119000 audit: BPF prog-id=6 op=LOAD Dec 16 03:25:15.123699 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 03:25:15.159626 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 03:25:15.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.170234 kernel: audit: type=1130 audit(1765855515.165:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.173891 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 03:25:15.204770 systemd-resolved[319]: Positive Trust Anchors: Dec 16 03:25:15.205289 systemd-resolved[319]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:25:15.205300 systemd-resolved[319]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:25:15.205512 systemd-resolved[319]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:25:15.227041 dracut-cmdline[331]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 03:25:15.256440 systemd-resolved[319]: Defaulting to hostname 'linux'. Dec 16 03:25:15.258904 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 03:25:15.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.263432 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:25:15.353231 kernel: Loading iSCSI transport class v2.0-870. 
Dec 16 03:25:15.370240 kernel: iscsi: registered transport (tcp) Dec 16 03:25:15.398226 kernel: iscsi: registered transport (qla4xxx) Dec 16 03:25:15.398285 kernel: QLogic iSCSI HBA Driver Dec 16 03:25:15.432990 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:25:15.450530 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:25:15.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.452563 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:25:15.517619 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 03:25:15.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.522556 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 03:25:15.531388 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 03:25:15.576724 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 03:25:15.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.582000 audit: BPF prog-id=7 op=LOAD Dec 16 03:25:15.582000 audit: BPF prog-id=8 op=LOAD Dec 16 03:25:15.586442 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:25:15.634847 systemd-udevd[574]: Using default interface naming scheme 'v257'. Dec 16 03:25:15.658293 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:25:15.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.665045 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 03:25:15.702899 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 03:25:15.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.707000 audit: BPF prog-id=9 op=LOAD Dec 16 03:25:15.709472 dracut-pre-trigger[652]: rd.md=0: removing MD RAID activation Dec 16 03:25:15.711084 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 03:25:15.760285 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 03:25:15.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.768902 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:25:15.794788 systemd-networkd[679]: lo: Link UP Dec 16 03:25:15.795276 systemd-networkd[679]: lo: Gained carrier Dec 16 03:25:15.796523 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Dec 16 03:25:15.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.803420 systemd[1]: Reached target network.target - Network. Dec 16 03:25:15.890135 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:25:15.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:15.896821 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 03:25:16.102695 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Dec 16 03:25:16.128639 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 03:25:16.167918 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Dec 16 03:25:16.191299 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Dec 16 03:25:16.209517 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Dec 16 03:25:16.236897 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 03:25:16.237076 kernel: AES CTR mode by8 optimization enabled Dec 16 03:25:16.278767 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 03:25:16.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:16.299456 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:25:16.299674 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:25:16.311336 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:25:16.312471 systemd-networkd[679]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:25:16.312478 systemd-networkd[679]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 03:25:16.313704 systemd-networkd[679]: eth0: Link UP Dec 16 03:25:16.314062 systemd-networkd[679]: eth0: Gained carrier Dec 16 03:25:16.314082 systemd-networkd[679]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:25:16.321616 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:25:16.327284 systemd-networkd[679]: eth0: DHCPv4 address 10.128.0.16/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 16 03:25:16.379266 disk-uuid[808]: Primary Header is updated. Dec 16 03:25:16.379266 disk-uuid[808]: Secondary Entries is updated. Dec 16 03:25:16.379266 disk-uuid[808]: Secondary Header is updated. Dec 16 03:25:16.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:16.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:16.395064 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 03:25:16.455997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:25:16.525569 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 03:25:16.546312 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:25:16.556689 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 03:25:16.575640 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 03:25:16.626391 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 03:25:16.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:17.452335 disk-uuid[809]: Warning: The kernel is still using the old partition table. Dec 16 03:25:17.452335 disk-uuid[809]: The new table will be used at the next reboot or after you Dec 16 03:25:17.452335 disk-uuid[809]: run partprobe(8) or kpartx(8) Dec 16 03:25:17.452335 disk-uuid[809]: The operation has completed successfully. Dec 16 03:25:17.544299 kernel: kauditd_printk_skb: 16 callbacks suppressed Dec 16 03:25:17.544342 kernel: audit: type=1130 audit(1765855517.469:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:17.544381 kernel: audit: type=1131 audit(1765855517.469:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:17.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:17.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:17.459714 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 03:25:17.459860 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 03:25:17.471965 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 03:25:17.593358 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (835) Dec 16 03:25:17.611209 kernel: BTRFS info (device sda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:25:17.611263 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:25:17.629305 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 03:25:17.629369 kernel: BTRFS info (device sda6): turning on async discard Dec 16 03:25:17.629397 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 03:25:17.651227 kernel: BTRFS info (device sda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:25:17.651155 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 03:25:17.686324 kernel: audit: type=1130 audit(1765855517.650:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:17.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:17.655419 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 03:25:17.947396 ignition[854]: Ignition 2.24.0 Dec 16 03:25:17.947440 ignition[854]: Stage: fetch-offline Dec 16 03:25:17.988324 kernel: audit: type=1130 audit(1765855517.961:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:17.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:17.950863 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 03:25:17.947533 ignition[854]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:25:17.964889 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 16 03:25:17.947567 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 16 03:25:17.947747 ignition[854]: parsed url from cmdline: "" Dec 16 03:25:17.947761 ignition[854]: no config URL provided Dec 16 03:25:17.947871 ignition[854]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 03:25:17.947895 ignition[854]: no config at "/usr/lib/ignition/user.ign" Dec 16 03:25:18.083346 kernel: audit: type=1130 audit(1765855518.055:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:18.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:18.042381 unknown[860]: fetched base config from "system" Dec 16 03:25:17.947910 ignition[854]: failed to fetch config: resource requires networking Dec 16 03:25:18.042394 unknown[860]: fetched base config from "system" Dec 16 03:25:17.949366 ignition[854]: Ignition finished successfully Dec 16 03:25:18.042405 unknown[860]: fetched user config from "gcp" Dec 16 03:25:18.031360 ignition[860]: Ignition 2.24.0 Dec 16 03:25:18.045843 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 03:25:18.163338 kernel: audit: type=1130 audit(1765855518.127:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:18.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:18.031370 ignition[860]: Stage: fetch Dec 16 03:25:18.059435 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 03:25:18.031598 ignition[860]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:25:18.126771 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Dec 16 03:25:18.031614 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 16 03:25:18.240297 kernel: audit: type=1130 audit(1765855518.210:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:18.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:18.131435 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 03:25:18.031751 ignition[860]: parsed url from cmdline: "" Dec 16 03:25:18.202048 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 03:25:18.031756 ignition[860]: no config URL provided Dec 16 03:25:18.211482 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 03:25:18.031772 ignition[860]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 03:25:18.228360 systemd-networkd[679]: eth0: Gained IPv6LL Dec 16 03:25:18.031785 ignition[860]: no config at "/usr/lib/ignition/user.ign" Dec 16 03:25:18.250462 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 03:25:18.031809 ignition[860]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Dec 16 03:25:18.259514 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 03:25:18.034503 ignition[860]: GET result: OK Dec 16 03:25:18.277826 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 03:25:18.034689 ignition[860]: parsing config with SHA512: 7163fc7884ad952e67303a2dc3c16ca3e4c7cbeefdcc0b4685cd208d565d929396ed5ae323200d2abbb6486eabd12981cb347df82b1321bec5d66d5d00e3632e Dec 16 03:25:18.290530 systemd[1]: Reached target basic.target - Basic System. Dec 16 03:25:18.042990 ignition[860]: fetch: fetch complete Dec 16 03:25:18.308995 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 03:25:18.042999 ignition[860]: fetch: fetch passed Dec 16 03:25:18.043056 ignition[860]: Ignition finished successfully Dec 16 03:25:18.123643 ignition[867]: Ignition 2.24.0 Dec 16 03:25:18.123656 ignition[867]: Stage: kargs Dec 16 03:25:18.123846 ignition[867]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:25:18.123860 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 16 03:25:18.124825 ignition[867]: kargs: kargs passed Dec 16 03:25:18.124881 ignition[867]: Ignition finished successfully Dec 16 03:25:18.199566 ignition[873]: Ignition 2.24.0 Dec 16 03:25:18.199574 ignition[873]: Stage: disks Dec 16 03:25:18.199758 ignition[873]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:25:18.199769 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 16 03:25:18.200846 ignition[873]: disks: disks passed Dec 16 03:25:18.200894 ignition[873]: Ignition finished successfully Dec 16 03:25:18.389641 systemd-fsck[881]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 16 03:25:18.489144 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 03:25:18.529386 kernel: audit: type=1130 audit(1765855518.488:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:18.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:18.491892 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 03:25:18.711248 kernel: EXT4-fs (sda9): mounted filesystem 1314c107-11a5-486b-9d52-be9f57b6bf1b r/w with ordered data mode. Quota mode: none. Dec 16 03:25:18.712316 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 03:25:18.719929 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 03:25:18.738475 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 03:25:18.754693 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 03:25:18.769175 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 03:25:18.842383 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (889) Dec 16 03:25:18.842426 kernel: BTRFS info (device sda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:25:18.842453 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:25:18.842478 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 03:25:18.842504 kernel: BTRFS info (device sda6): turning on async discard Dec 16 03:25:18.842522 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 03:25:18.769252 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 03:25:18.769287 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 03:25:18.785561 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 03:25:18.850714 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 03:25:18.875436 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 03:25:19.200761 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 03:25:19.238502 kernel: audit: type=1130 audit(1765855519.199:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:19.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:19.204348 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 03:25:19.248366 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 03:25:19.279499 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 03:25:19.296396 kernel: BTRFS info (device sda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:25:19.316084 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 03:25:19.352313 kernel: audit: type=1130 audit(1765855519.324:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:19.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:19.352440 ignition[986]: INFO : Ignition 2.24.0 Dec 16 03:25:19.352440 ignition[986]: INFO : Stage: mount Dec 16 03:25:19.352440 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:25:19.352440 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 16 03:25:19.352440 ignition[986]: INFO : mount: mount passed Dec 16 03:25:19.352440 ignition[986]: INFO : Ignition finished successfully Dec 16 03:25:19.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:19.333735 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 03:25:19.363002 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 03:25:19.714224 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 03:25:19.754344 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (997) Dec 16 03:25:19.771599 kernel: BTRFS info (device sda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:25:19.771672 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:25:19.787429 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 03:25:19.787489 kernel: BTRFS info (device sda6): turning on async discard Dec 16 03:25:19.787516 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 03:25:19.795776 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 03:25:19.841485 ignition[1014]: INFO : Ignition 2.24.0 Dec 16 03:25:19.841485 ignition[1014]: INFO : Stage: files Dec 16 03:25:19.854489 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:25:19.854489 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 16 03:25:19.854489 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping Dec 16 03:25:19.854489 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 03:25:19.854489 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 03:25:19.854489 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 03:25:19.854489 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 03:25:19.854489 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 03:25:19.854347 unknown[1014]: wrote ssh authorized keys file for user: core Dec 16 03:25:19.947339 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 03:25:19.947339 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 03:25:19.979288 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 03:25:20.132429 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 03:25:20.148349 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 03:25:20.148349 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing 
file "/sysroot/home/core/install.sh" Dec 16 03:25:20.148349 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 03:25:20.148349 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 03:25:20.148349 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 03:25:20.148349 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 03:25:20.148349 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 03:25:20.148349 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 03:25:20.148349 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 03:25:20.148349 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 03:25:20.148349 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 03:25:20.148349 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 03:25:20.148349 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 03:25:20.148349 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Dec 16 03:25:20.577538 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 03:25:21.231876 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 03:25:21.231876 ignition[1014]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 03:25:21.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:21.267565 ignition[1014]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 03:25:21.267565 ignition[1014]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 03:25:21.267565 ignition[1014]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 03:25:21.267565 ignition[1014]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 03:25:21.267565 ignition[1014]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 03:25:21.267565 ignition[1014]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 03:25:21.267565 ignition[1014]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 03:25:21.267565 ignition[1014]: INFO : files: files passed Dec 16 03:25:21.267565 ignition[1014]: INFO : Ignition finished successfully Dec 16 03:25:21.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:21.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:21.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:21.241224 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 03:25:21.251113 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 03:25:21.277719 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 03:25:21.293859 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 03:25:21.465345 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:25:21.465345 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:25:21.293975 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 03:25:21.478540 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:25:21.372934 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 03:25:21.386859 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 03:25:21.419557 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 03:25:21.543895 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 03:25:21.544086 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 03:25:21.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:21.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:21.562014 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 03:25:21.580356 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 03:25:21.596934 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 03:25:21.598241 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 03:25:21.658355 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 03:25:21.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:21.660563 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 03:25:21.726581 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 03:25:21.726842 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:25:21.737580 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:25:21.747665 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 03:25:21.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:21.765616 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 03:25:21.765810 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 03:25:21.798629 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 03:25:21.807588 systemd[1]: Stopped target basic.target - Basic System. Dec 16 03:25:21.824635 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 03:25:21.840612 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 03:25:21.856620 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 03:25:21.873637 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 03:25:21.891605 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 03:25:21.908605 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 03:25:21.925650 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 03:25:21.942605 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 03:25:21.960596 systemd[1]: Stopped target swap.target - Swaps. Dec 16 03:25:21.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:21.976616 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 03:25:21.976839 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 03:25:22.004633 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:25:22.013641 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 16 03:25:22.030602 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 03:25:22.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.030777 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:25:22.048589 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 03:25:22.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.048776 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 03:25:22.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.084712 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 03:25:22.084925 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 03:25:22.096758 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 03:25:22.096952 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 03:25:22.115046 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 03:25:22.190346 ignition[1070]: INFO : Ignition 2.24.0 Dec 16 03:25:22.190346 ignition[1070]: INFO : Stage: umount Dec 16 03:25:22.190346 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:25:22.190346 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 16 03:25:22.190346 ignition[1070]: INFO : umount: umount passed Dec 16 03:25:22.190346 ignition[1070]: INFO : Ignition finished successfully Dec 16 03:25:22.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.146507 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 03:25:22.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.174389 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 03:25:22.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:22.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.174639 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:25:22.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.213543 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 03:25:22.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.213744 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:25:22.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.224600 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 03:25:22.224775 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 03:25:22.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.255177 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 03:25:22.256567 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 03:25:22.256691 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 03:25:22.260070 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 03:25:22.260227 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 03:25:22.289780 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 03:25:22.289890 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 03:25:22.304820 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 03:25:22.550345 kernel: kauditd_printk_skb: 24 callbacks suppressed Dec 16 03:25:22.550398 kernel: audit: type=1131 audit(1765855522.508:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.304925 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 03:25:22.587313 kernel: audit: type=1131 audit(1765855522.558:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.321420 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Dec 16 03:25:22.625344 kernel: audit: type=1131 audit(1765855522.595:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.321490 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 03:25:22.341372 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 03:25:22.341453 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 03:25:22.695390 kernel: audit: type=1131 audit(1765855522.658:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.359381 systemd[1]: Stopped target network.target - Network. Dec 16 03:25:22.744641 kernel: audit: type=1131 audit(1765855522.703:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.744676 kernel: audit: type=1334 audit(1765855522.727:66): prog-id=9 op=UNLOAD Dec 16 03:25:22.744695 kernel: audit: type=1334 audit(1765855522.735:67): prog-id=6 op=UNLOAD Dec 16 03:25:22.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.727000 audit: BPF prog-id=9 op=UNLOAD Dec 16 03:25:22.735000 audit: BPF prog-id=6 op=UNLOAD Dec 16 03:25:22.375317 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 03:25:22.375497 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 03:25:22.384567 systemd[1]: Stopped target paths.target - Path Units. Dec 16 03:25:22.408289 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 03:25:22.412266 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:25:22.425309 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 03:25:22.853310 kernel: audit: type=1131 audit(1765855522.815:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.432475 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 03:25:22.889404 kernel: audit: type=1131 audit(1765855522.861:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:22.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.446501 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 03:25:22.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.446552 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 03:25:22.941314 kernel: audit: type=1131 audit(1765855522.897:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.460504 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 03:25:22.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.460552 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 03:25:22.475538 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 03:25:22.475578 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:25:22.491526 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 03:25:22.491609 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 03:25:23.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.509554 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 03:25:22.509633 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 03:25:23.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.579740 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 03:25:22.579820 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 03:25:22.616822 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 03:25:23.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.634530 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 03:25:22.644094 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 03:25:22.644232 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 03:25:23.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.682466 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 03:25:23.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:22.682589 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 03:25:23.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.737439 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 03:25:23.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.753383 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 03:25:23.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.753459 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:25:22.773484 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 03:25:23.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.781459 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 03:25:22.781527 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 03:25:23.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:23.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.816604 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 03:25:22.816687 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:25:22.862472 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 03:25:22.862553 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 03:25:22.898381 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:25:22.934969 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 03:25:22.935129 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:25:22.953319 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 03:25:23.353324 systemd-journald[292]: Received SIGTERM from PID 1 (systemd). Dec 16 03:25:22.953492 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 03:25:22.958566 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 03:25:22.958622 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:25:22.974492 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 03:25:22.974552 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 03:25:23.025474 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 03:25:23.025577 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Dec 16 03:25:23.050447 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 03:25:23.050653 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 03:25:23.087820 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 03:25:23.104282 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 03:25:23.104378 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:25:23.121428 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 03:25:23.121510 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:25:23.130602 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 03:25:23.130684 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:25:23.147566 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 03:25:23.147621 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:25:23.175474 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:25:23.175537 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:25:23.195449 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 03:25:23.195563 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 03:25:23.220819 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 03:25:23.220927 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 03:25:23.247520 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 03:25:23.266488 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 03:25:23.305009 systemd[1]: Switching root. Dec 16 03:25:23.603379 systemd-journald[292]: Journal stopped Dec 16 03:25:26.208352 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 03:25:26.208388 kernel: SELinux: policy capability open_perms=1 Dec 16 03:25:26.208407 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 03:25:26.208419 kernel: SELinux: policy capability always_check_network=0 Dec 16 03:25:26.208431 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 03:25:26.208443 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 03:25:26.208457 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 03:25:26.208472 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 03:25:26.208484 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 03:25:26.208498 systemd[1]: Successfully loaded SELinux policy in 110.662ms. Dec 16 03:25:26.208513 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.074ms. Dec 16 03:25:26.208528 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 03:25:26.208541 systemd[1]: Detected virtualization google. Dec 16 03:25:26.208557 systemd[1]: Detected architecture x86-64. Dec 16 03:25:26.208571 systemd[1]: Detected first boot. 
Dec 16 03:25:26.208585 systemd[1]: Initializing machine ID from random generator. Dec 16 03:25:26.208601 kernel: Guest personality initialized and is inactive Dec 16 03:25:26.208616 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 03:25:26.208629 kernel: Initialized host personality Dec 16 03:25:26.208642 zram_generator::config[1112]: No configuration found. Dec 16 03:25:26.208657 kernel: NET: Registered PF_VSOCK protocol family Dec 16 03:25:26.208670 systemd[1]: Populated /etc with preset unit settings. Dec 16 03:25:26.208683 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 03:25:26.208697 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 03:25:26.208713 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 03:25:26.208733 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 03:25:26.208747 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 03:25:26.208761 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 03:25:26.208775 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 03:25:26.208792 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 03:25:26.208806 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 03:25:26.208820 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 03:25:26.208834 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 03:25:26.208848 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:25:26.208862 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:25:26.208876 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 03:25:26.208892 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 03:25:26.208908 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 03:25:26.208922 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:25:26.208936 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 03:25:26.208954 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:25:26.208970 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:25:26.208988 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 03:25:26.209002 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 03:25:26.209017 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 03:25:26.209031 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 03:25:26.209045 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:25:26.209059 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 03:25:26.209074 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 03:25:26.209091 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:25:26.209105 systemd[1]: Reached target swap.target - Swaps. 
Dec 16 03:25:26.209119 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 03:25:26.209133 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 03:25:26.209147 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 03:25:26.209165 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:25:26.209179 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 03:25:26.209254 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:25:26.209279 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 03:25:26.209304 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 03:25:26.209334 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:25:26.209358 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:25:26.209383 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 03:25:26.209407 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 03:25:26.209430 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 03:25:26.209452 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 03:25:26.209475 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:25:26.209505 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 03:25:26.209528 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 03:25:26.209553 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 03:25:26.209579 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 03:25:26.209603 systemd[1]: Reached target machines.target - Containers. Dec 16 03:25:26.209627 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 03:25:26.209655 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:25:26.209680 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 03:25:26.209704 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 03:25:26.209727 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:25:26.209752 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:25:26.209776 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:25:26.209800 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 03:25:26.209830 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:25:26.209869 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 03:25:26.209893 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 03:25:26.209917 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 03:25:26.209941 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Dec 16 03:25:26.209964 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 03:25:26.209987 kernel: ACPI: bus type drm_connector registered Dec 16 03:25:26.210015 kernel: fuse: init (API version 7.41) Dec 16 03:25:26.210038 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:25:26.210063 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:25:26.210086 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 03:25:26.210109 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:25:26.210169 systemd-journald[1201]: Collecting audit messages is enabled. Dec 16 03:25:26.210243 systemd-journald[1201]: Journal started Dec 16 03:25:26.210290 systemd-journald[1201]: Runtime Journal (/run/log/journal/c347acc96296426fa3605bbc8f95dc45) is 8M, max 148.4M, 140.4M free. Dec 16 03:25:25.533000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 03:25:26.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.133000 audit: BPF prog-id=14 op=UNLOAD Dec 16 03:25:26.133000 audit: BPF prog-id=13 op=UNLOAD Dec 16 03:25:26.134000 audit: BPF prog-id=15 op=LOAD Dec 16 03:25:26.134000 audit: BPF prog-id=16 op=LOAD Dec 16 03:25:26.134000 audit: BPF prog-id=17 op=LOAD Dec 16 03:25:26.203000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 03:25:26.203000 audit[1201]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff54ed6c00 a2=4000 a3=0 items=0 ppid=1 pid=1201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:26.203000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 03:25:24.947628 systemd[1]: Queued start job for default target multi-user.target. Dec 16 03:25:24.960965 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 16 03:25:24.961731 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 03:25:26.222224 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 03:25:26.235245 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 03:25:26.265254 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:25:26.293266 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:25:26.305222 systemd[1]: Started systemd-journald.service - Journal Service. 
Dec 16 03:25:26.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.315868 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 03:25:26.324499 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 03:25:26.333503 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 03:25:26.342475 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 03:25:26.351490 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 03:25:26.360477 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 03:25:26.369780 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 03:25:26.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.380767 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:25:26.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.391699 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 03:25:26.391955 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 03:25:26.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.402656 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:25:26.402901 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:25:26.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.413717 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:25:26.413985 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 03:25:26.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.422662 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 16 03:25:26.422905 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:25:26.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.433659 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 03:25:26.433910 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 03:25:26.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.442641 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:25:26.442890 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:25:26.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.451695 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:25:26.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.460758 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:25:26.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.472618 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 03:25:26.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.482759 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 03:25:26.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.494487 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 16 03:25:26.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.517419 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:25:26.527643 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 03:25:26.539838 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 03:25:26.556348 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 03:25:26.565328 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 03:25:26.565509 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 03:25:26.575504 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 03:25:26.586069 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:25:26.586313 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:25:26.587786 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 03:25:26.603031 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 03:25:26.613496 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:25:26.621274 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 03:25:26.630629 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:25:26.633668 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 03:25:26.638206 systemd-journald[1201]: Time spent on flushing to /var/log/journal/c347acc96296426fa3605bbc8f95dc45 is 100.296ms for 1088 entries. Dec 16 03:25:26.638206 systemd-journald[1201]: System Journal (/var/log/journal/c347acc96296426fa3605bbc8f95dc45) is 8M, max 588.1M, 580.1M free. Dec 16 03:25:26.770530 systemd-journald[1201]: Received client request to flush runtime journal. Dec 16 03:25:26.770599 kernel: loop1: detected capacity change from 0 to 111560 Dec 16 03:25:26.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.653607 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 03:25:26.665548 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 03:25:26.679626 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 03:25:26.689536 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
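Editor's note: the systemd-journald flush message above gives enough to estimate per-entry cost; 100.296 ms for 1088 entries is roughly 92 microseconds per journal entry. A tiny check with the figures copied from the log:

# Figures taken from the systemd-journald flush message above.
flush_ms = 100.296   # total time spent flushing to /var/log/journal
entries = 1088       # entries written during the flush

per_entry_us = flush_ms / entries * 1000
print(f"~{per_entry_us:.0f} us per journal entry")   # roughly 92 us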
Dec 16 03:25:26.699809 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 03:25:26.712540 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 03:25:26.724575 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 03:25:26.736416 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:25:26.779076 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 03:25:26.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.791276 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Dec 16 03:25:26.791313 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Dec 16 03:25:26.802414 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 03:25:26.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.813563 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:25:26.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.831454 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 03:25:26.844682 kernel: loop2: detected capacity change from 0 to 229808 Dec 16 03:25:26.899315 kernel: loop3: detected capacity change from 0 to 50784 Dec 16 03:25:26.913061 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 03:25:26.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:26.922000 audit: BPF prog-id=18 op=LOAD Dec 16 03:25:26.923000 audit: BPF prog-id=19 op=LOAD Dec 16 03:25:26.923000 audit: BPF prog-id=20 op=LOAD Dec 16 03:25:26.926457 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 03:25:26.936000 audit: BPF prog-id=21 op=LOAD Dec 16 03:25:26.939529 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 03:25:26.950528 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:25:26.962000 audit: BPF prog-id=22 op=LOAD Dec 16 03:25:26.962000 audit: BPF prog-id=23 op=LOAD Dec 16 03:25:26.962000 audit: BPF prog-id=24 op=LOAD Dec 16 03:25:26.966441 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 03:25:26.979097 kernel: loop4: detected capacity change from 0 to 49888 Dec 16 03:25:26.981000 audit: BPF prog-id=25 op=LOAD Dec 16 03:25:26.983000 audit: BPF prog-id=26 op=LOAD Dec 16 03:25:26.983000 audit: BPF prog-id=27 op=LOAD Dec 16 03:25:26.986851 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 03:25:27.044967 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. 
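Editor's note: the "loopN: detected capacity change" messages track loop devices picking up backing files (here, plausibly the sysext images that get merged a few lines later); the kernel reports block-device capacity in 512-byte sectors. An illustrative sketch, assuming standard Linux sysfs paths, that prints the size of each active loop device:

import glob
import os

SECTOR = 512  # /sys/block/*/size is expressed in 512-byte sectors

for path in sorted(glob.glob("/sys/block/loop*/size")):
    with open(path) as fh:
        sectors = int(fh.read().strip())
    if sectors:  # 0 means no backing file is attached
        dev = os.path.basename(os.path.dirname(path))   # e.g. "loop1"
        print(f"{dev}: {sectors * SECTOR / 2**20:.1f} MiB")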
Dec 16 03:25:27.047255 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Dec 16 03:25:27.051222 kernel: loop5: detected capacity change from 0 to 111560 Dec 16 03:25:27.063466 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:25:27.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:27.101215 kernel: loop6: detected capacity change from 0 to 229808 Dec 16 03:25:27.122096 systemd-nsresourced[1259]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 03:25:27.132740 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 03:25:27.137442 kernel: loop7: detected capacity change from 0 to 50784 Dec 16 03:25:27.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:27.168036 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 03:25:27.182590 kernel: loop1: detected capacity change from 0 to 49888 Dec 16 03:25:27.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:27.209791 (sd-merge)[1263]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-gce.raw'. Dec 16 03:25:27.220846 (sd-merge)[1263]: Merged extensions into '/usr'. Dec 16 03:25:27.231260 systemd[1]: Reload requested from client PID 1236 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 03:25:27.231291 systemd[1]: Reloading... Dec 16 03:25:27.415234 zram_generator::config[1306]: No configuration found. Dec 16 03:25:27.489034 systemd-oomd[1255]: No swap; memory pressure usage will be degraded Dec 16 03:25:27.526047 systemd-resolved[1256]: Positive Trust Anchors: Dec 16 03:25:27.527678 systemd-resolved[1256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:25:27.527778 systemd-resolved[1256]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:25:27.528205 systemd-resolved[1256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:25:27.544479 systemd-resolved[1256]: Defaulting to hostname 'linux'. Dec 16 03:25:27.865247 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 03:25:27.865430 systemd[1]: Reloading finished in 632 ms. Dec 16 03:25:27.890125 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. 
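Editor's note: systemd-oomd warns above that with no swap its memory-pressure handling is degraded; it bases decisions on the kernel's pressure-stall information (PSI). A minimal, illustrative reader for /proc/pressure/memory, assuming a kernel built with PSI support:

# /proc/pressure/memory looks like:
#   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
#   full avg10=0.00 avg60=0.00 avg300=0.00 total=0
with open("/proc/pressure/memory") as fh:
    for line in fh:
        kind, *fields = line.split()
        values = dict(f.split("=") for f in fields)
        print(kind, "avg10 =", values["avg10"], "%")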
Dec 16 03:25:27.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:27.901693 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 03:25:27.906716 kernel: kauditd_printk_skb: 78 callbacks suppressed Dec 16 03:25:27.906784 kernel: audit: type=1130 audit(1765855527.900:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:27.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:27.937768 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 03:25:27.960225 kernel: audit: type=1130 audit(1765855527.936:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:27.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:27.969808 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 03:25:27.992242 kernel: audit: type=1130 audit(1765855527.968:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:28.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:28.009384 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:25:28.026235 kernel: audit: type=1130 audit(1765855528.002:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:28.041683 systemd[1]: Starting ensure-sysext.service... Dec 16 03:25:28.055301 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 03:25:28.064000 audit: BPF prog-id=8 op=UNLOAD Dec 16 03:25:28.069417 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 16 03:25:28.073215 kernel: audit: type=1334 audit(1765855528.064:151): prog-id=8 op=UNLOAD Dec 16 03:25:28.064000 audit: BPF prog-id=7 op=UNLOAD Dec 16 03:25:28.065000 audit: BPF prog-id=28 op=LOAD Dec 16 03:25:28.096476 kernel: audit: type=1334 audit(1765855528.064:152): prog-id=7 op=UNLOAD Dec 16 03:25:28.096547 kernel: audit: type=1334 audit(1765855528.065:153): prog-id=28 op=LOAD Dec 16 03:25:28.096583 kernel: audit: type=1334 audit(1765855528.065:154): prog-id=29 op=LOAD Dec 16 03:25:28.065000 audit: BPF prog-id=29 op=LOAD Dec 16 03:25:28.118214 kernel: audit: type=1334 audit(1765855528.105:155): prog-id=30 op=LOAD Dec 16 03:25:28.105000 audit: BPF prog-id=30 op=LOAD Dec 16 03:25:28.116316 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 03:25:28.116365 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 03:25:28.116879 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 03:25:28.105000 audit: BPF prog-id=21 op=UNLOAD Dec 16 03:25:28.120607 systemd-tmpfiles[1347]: ACLs are not supported, ignoring. Dec 16 03:25:28.120719 systemd-tmpfiles[1347]: ACLs are not supported, ignoring. Dec 16 03:25:28.113000 audit: BPF prog-id=31 op=LOAD Dec 16 03:25:28.113000 audit: BPF prog-id=15 op=UNLOAD Dec 16 03:25:28.113000 audit: BPF prog-id=32 op=LOAD Dec 16 03:25:28.114000 audit: BPF prog-id=33 op=LOAD Dec 16 03:25:28.114000 audit: BPF prog-id=16 op=UNLOAD Dec 16 03:25:28.114000 audit: BPF prog-id=17 op=UNLOAD Dec 16 03:25:28.115000 audit: BPF prog-id=34 op=LOAD Dec 16 03:25:28.115000 audit: BPF prog-id=18 op=UNLOAD Dec 16 03:25:28.115000 audit: BPF prog-id=35 op=LOAD Dec 16 03:25:28.115000 audit: BPF prog-id=36 op=LOAD Dec 16 03:25:28.128243 kernel: audit: type=1334 audit(1765855528.105:156): prog-id=21 op=UNLOAD Dec 16 03:25:28.115000 audit: BPF prog-id=19 op=UNLOAD Dec 16 03:25:28.115000 audit: BPF prog-id=20 op=UNLOAD Dec 16 03:25:28.119000 audit: BPF prog-id=37 op=LOAD Dec 16 03:25:28.129000 audit: BPF prog-id=22 op=UNLOAD Dec 16 03:25:28.129000 audit: BPF prog-id=38 op=LOAD Dec 16 03:25:28.129000 audit: BPF prog-id=39 op=LOAD Dec 16 03:25:28.129000 audit: BPF prog-id=23 op=UNLOAD Dec 16 03:25:28.129000 audit: BPF prog-id=24 op=UNLOAD Dec 16 03:25:28.130000 audit: BPF prog-id=40 op=LOAD Dec 16 03:25:28.130000 audit: BPF prog-id=25 op=UNLOAD Dec 16 03:25:28.130000 audit: BPF prog-id=41 op=LOAD Dec 16 03:25:28.130000 audit: BPF prog-id=42 op=LOAD Dec 16 03:25:28.130000 audit: BPF prog-id=26 op=UNLOAD Dec 16 03:25:28.130000 audit: BPF prog-id=27 op=UNLOAD Dec 16 03:25:28.138272 systemd-tmpfiles[1347]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 03:25:28.138293 systemd-tmpfiles[1347]: Skipping /boot Dec 16 03:25:28.147352 systemd[1]: Reload requested from client PID 1346 ('systemctl') (unit ensure-sysext.service)... Dec 16 03:25:28.147389 systemd[1]: Reloading... Dec 16 03:25:28.175722 systemd-udevd[1348]: Using default interface naming scheme 'v257'. Dec 16 03:25:28.187972 systemd-tmpfiles[1347]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 03:25:28.188598 systemd-tmpfiles[1347]: Skipping /boot Dec 16 03:25:28.296218 zram_generator::config[1384]: No configuration found. 
Dec 16 03:25:28.530221 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 03:25:28.560337 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 03:25:28.643220 kernel: ACPI: button: Power Button [PWRF] Dec 16 03:25:28.652228 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Dec 16 03:25:28.660217 kernel: ACPI: button: Sleep Button [SLPF] Dec 16 03:25:28.787220 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 16 03:25:28.882229 kernel: EDAC MC: Ver: 3.0.0 Dec 16 03:25:28.949398 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Dec 16 03:25:28.961971 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 03:25:28.963562 systemd[1]: Reloading finished in 815 ms. Dec 16 03:25:28.983345 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:25:28.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:29.000000 audit: BPF prog-id=43 op=LOAD Dec 16 03:25:29.000000 audit: BPF prog-id=37 op=UNLOAD Dec 16 03:25:29.000000 audit: BPF prog-id=44 op=LOAD Dec 16 03:25:29.000000 audit: BPF prog-id=45 op=LOAD Dec 16 03:25:29.000000 audit: BPF prog-id=38 op=UNLOAD Dec 16 03:25:29.000000 audit: BPF prog-id=39 op=UNLOAD Dec 16 03:25:29.002000 audit: BPF prog-id=46 op=LOAD Dec 16 03:25:29.002000 audit: BPF prog-id=31 op=UNLOAD Dec 16 03:25:29.002000 audit: BPF prog-id=47 op=LOAD Dec 16 03:25:29.003000 audit: BPF prog-id=48 op=LOAD Dec 16 03:25:29.003000 audit: BPF prog-id=32 op=UNLOAD Dec 16 03:25:29.003000 audit: BPF prog-id=33 op=UNLOAD Dec 16 03:25:29.006000 audit: BPF prog-id=49 op=LOAD Dec 16 03:25:29.006000 audit: BPF prog-id=50 op=LOAD Dec 16 03:25:29.006000 audit: BPF prog-id=28 op=UNLOAD Dec 16 03:25:29.006000 audit: BPF prog-id=29 op=UNLOAD Dec 16 03:25:29.007000 audit: BPF prog-id=51 op=LOAD Dec 16 03:25:29.007000 audit: BPF prog-id=30 op=UNLOAD Dec 16 03:25:29.013000 audit: BPF prog-id=52 op=LOAD Dec 16 03:25:29.015000 audit: BPF prog-id=34 op=UNLOAD Dec 16 03:25:29.015000 audit: BPF prog-id=53 op=LOAD Dec 16 03:25:29.015000 audit: BPF prog-id=54 op=LOAD Dec 16 03:25:29.015000 audit: BPF prog-id=35 op=UNLOAD Dec 16 03:25:29.015000 audit: BPF prog-id=36 op=UNLOAD Dec 16 03:25:29.018000 audit: BPF prog-id=55 op=LOAD Dec 16 03:25:29.018000 audit: BPF prog-id=40 op=UNLOAD Dec 16 03:25:29.018000 audit: BPF prog-id=56 op=LOAD Dec 16 03:25:29.018000 audit: BPF prog-id=57 op=LOAD Dec 16 03:25:29.018000 audit: BPF prog-id=41 op=UNLOAD Dec 16 03:25:29.018000 audit: BPF prog-id=42 op=UNLOAD Dec 16 03:25:29.031429 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:25:29.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:29.080387 systemd[1]: Finished ensure-sysext.service. Dec 16 03:25:29.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:29.116325 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Dec 16 03:25:29.125429 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:25:29.126913 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 03:25:29.145755 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 03:25:29.156792 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:25:29.163845 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:25:29.174415 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:25:29.188732 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:25:29.206060 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:25:29.218548 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 16 03:25:29.226962 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:25:29.227208 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:25:29.232000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 03:25:29.232000 audit[1494]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd1653d520 a2=420 a3=0 items=0 ppid=1467 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:29.232000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:25:29.233975 augenrules[1494]: No rules Dec 16 03:25:29.236435 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 03:25:29.247787 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 03:25:29.256900 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:25:29.267339 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 03:25:29.284671 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 03:25:29.284831 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 03:25:29.287460 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 03:25:29.292576 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:25:29.292800 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:25:29.296570 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 03:25:29.297695 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 03:25:29.298831 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
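Editor's note: the PROCTITLE record in the audit-rules block above stores the auditctl command line hex-encoded, with NUL bytes separating the arguments. A small decode of the exact value from the log:

import binascii

# PROCTITLE value copied verbatim from the audit record above.
proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"

raw = binascii.unhexlify(proctitle)
args = raw.split(b"\x00")
print(args)                                   # [b'/sbin/auditctl', b'-R', b'/etc/audit/audit.rules']
print(" ".join(a.decode() for a in args))     # /sbin/auditctl -R /etc/audit/audit.rules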
Dec 16 03:25:29.299406 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:25:29.299982 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:25:29.300356 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 03:25:29.300914 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:25:29.303119 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:25:29.304845 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:25:29.305175 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:25:29.323730 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:25:29.323993 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:25:29.363093 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 03:25:29.395775 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 16 03:25:29.406511 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Dec 16 03:25:29.417563 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 03:25:29.443277 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 03:25:29.453894 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 03:25:29.458721 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 03:25:29.506332 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Dec 16 03:25:29.559822 systemd-networkd[1501]: lo: Link UP Dec 16 03:25:29.559837 systemd-networkd[1501]: lo: Gained carrier Dec 16 03:25:29.561095 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:25:29.562536 systemd-networkd[1501]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:25:29.562551 systemd-networkd[1501]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 03:25:29.563737 systemd-networkd[1501]: eth0: Link UP Dec 16 03:25:29.564331 systemd-networkd[1501]: eth0: Gained carrier Dec 16 03:25:29.564361 systemd-networkd[1501]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:25:29.574040 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 03:25:29.574346 systemd-networkd[1501]: eth0: DHCPv4 address 10.128.0.16/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 16 03:25:29.584101 systemd[1]: Reached target network.target - Network. Dec 16 03:25:29.594479 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 03:25:29.606807 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 03:25:29.732754 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
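Editor's note: the DHCPv4 lease above has the usual GCE shape, a host-only /32 on eth0 with a gateway at 10.128.0.1 that is reached via an on-link route rather than being inside the interface's prefix. A short illustrative check with Python's ipaddress module, using the addresses from the log:

import ipaddress

iface = ipaddress.ip_interface("10.128.0.16/32")   # address from the lease above
gateway = ipaddress.ip_address("10.128.0.1")

print(iface.network)               # 10.128.0.16/32, a single-host prefix
print(gateway in iface.network)    # False: the gateway is not inside the local prefix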
Dec 16 03:25:30.097678 ldconfig[1493]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 03:25:30.102965 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 03:25:30.114268 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 03:25:30.137412 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 03:25:30.147074 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 03:25:30.156543 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 03:25:30.166373 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 03:25:30.176482 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 03:25:30.187487 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 03:25:30.196597 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 03:25:30.206425 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 03:25:30.216529 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 03:25:30.225296 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 03:25:30.235289 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 03:25:30.235337 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:25:30.242291 systemd[1]: Reached target timers.target - Timer Units. Dec 16 03:25:30.252837 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 03:25:30.263804 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 03:25:30.273943 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 03:25:30.284445 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 03:25:30.294291 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 03:25:30.313958 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 03:25:30.322723 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 03:25:30.334185 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 03:25:30.344651 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:25:30.353317 systemd[1]: Reached target basic.target - Basic System. Dec 16 03:25:30.362608 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:25:30.362729 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:25:30.364087 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 03:25:30.381849 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 03:25:30.404672 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 03:25:30.418145 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 03:25:30.431441 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
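Editor's note: the ldconfig warning at the start of this block is about /usr/lib/ld.so.conf, a plain-text file that ldconfig inspects and reports as "not an ELF file" because it lacks the ELF magic bytes. For illustration only, a tiny sketch of that same magic-byte check (an ELF file begins with 0x7f 'E' 'L' 'F'):

import sys

ELF_MAGIC = b"\x7fELF"

def is_elf(path: str) -> bool:
    """Return True if the file begins with the 4-byte ELF magic."""
    with open(path, "rb") as fh:
        return fh.read(4) == ELF_MAGIC

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(path, "ELF" if is_elf(path) else "not an ELF file")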
Dec 16 03:25:30.435719 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 03:25:30.453357 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 03:25:30.458377 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 03:25:30.469015 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 03:25:30.476659 jq[1547]: false Dec 16 03:25:30.481095 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 03:25:30.493446 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 03:25:30.503534 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Refreshing passwd entry cache Dec 16 03:25:30.501044 oslogin_cache_refresh[1549]: Refreshing passwd entry cache Dec 16 03:25:30.505515 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 03:25:30.518514 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 03:25:30.521901 coreos-metadata[1544]: Dec 16 03:25:30.521 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Dec 16 03:25:30.526727 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Failure getting users, quitting Dec 16 03:25:30.526727 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 03:25:30.526727 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Refreshing group entry cache Dec 16 03:25:30.525481 oslogin_cache_refresh[1549]: Failure getting users, quitting Dec 16 03:25:30.525519 oslogin_cache_refresh[1549]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 03:25:30.525646 oslogin_cache_refresh[1549]: Refreshing group entry cache Dec 16 03:25:30.531851 coreos-metadata[1544]: Dec 16 03:25:30.531 INFO Fetch successful Dec 16 03:25:30.531851 coreos-metadata[1544]: Dec 16 03:25:30.531 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Dec 16 03:25:30.532136 coreos-metadata[1544]: Dec 16 03:25:30.532 INFO Fetch successful Dec 16 03:25:30.537514 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 03:25:30.537869 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Failure getting groups, quitting Dec 16 03:25:30.537869 google_oslogin_nss_cache[1549]: oslogin_cache_refresh[1549]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 03:25:30.537964 coreos-metadata[1544]: Dec 16 03:25:30.536 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Dec 16 03:25:30.535013 oslogin_cache_refresh[1549]: Failure getting groups, quitting Dec 16 03:25:30.535047 oslogin_cache_refresh[1549]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 03:25:30.539602 coreos-metadata[1544]: Dec 16 03:25:30.539 INFO Fetch successful Dec 16 03:25:30.539602 coreos-metadata[1544]: Dec 16 03:25:30.539 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Dec 16 03:25:30.542225 coreos-metadata[1544]: Dec 16 03:25:30.540 INFO Fetch successful Dec 16 03:25:30.546342 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). 
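Editor's note: coreos-metadata and the oslogin cache above both query the GCE metadata service at 169.254.169.254, which expects the Metadata-Flavor: Google request header. An illustrative sketch of the same fetch with urllib, using an endpoint path copied from the log; it only succeeds when run on a GCE instance:

import urllib.request

URL = "http://169.254.169.254/computeMetadata/v1/instance/hostname"

req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
with urllib.request.urlopen(req, timeout=5) as resp:   # works only inside GCE
    print(resp.read().decode())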
Dec 16 03:25:30.548127 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 03:25:30.551047 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 03:25:30.556852 extend-filesystems[1548]: Found /dev/sda6 Dec 16 03:25:30.569361 extend-filesystems[1548]: Found /dev/sda9 Dec 16 03:25:30.563539 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 03:25:30.588623 extend-filesystems[1548]: Checking size of /dev/sda9 Dec 16 03:25:30.605824 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 03:25:30.617638 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 03:25:30.618440 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 03:25:30.618930 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 03:25:30.619944 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 03:25:30.627516 ntpd[1553]: ntpd 4.2.8p18@1.4062-o Mon Dec 15 23:46:33 UTC 2025 (1): Starting Dec 16 03:25:30.631154 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: ntpd 4.2.8p18@1.4062-o Mon Dec 15 23:46:33 UTC 2025 (1): Starting Dec 16 03:25:30.631154 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 03:25:30.631154 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: ---------------------------------------------------- Dec 16 03:25:30.631154 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: ntp-4 is maintained by Network Time Foundation, Dec 16 03:25:30.631154 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 03:25:30.631154 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: corporation. Support and training for ntp-4 are Dec 16 03:25:30.631154 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: available at https://www.nwtime.org/support Dec 16 03:25:30.631154 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: ---------------------------------------------------- Dec 16 03:25:30.630817 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 03:25:30.627597 ntpd[1553]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 03:25:30.632491 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 03:25:30.627612 ntpd[1553]: ---------------------------------------------------- Dec 16 03:25:30.627625 ntpd[1553]: ntp-4 is maintained by Network Time Foundation, Dec 16 03:25:30.627638 ntpd[1553]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 03:25:30.627651 ntpd[1553]: corporation. 
Support and training for ntp-4 are Dec 16 03:25:30.627665 ntpd[1553]: available at https://www.nwtime.org/support Dec 16 03:25:30.627678 ntpd[1553]: ---------------------------------------------------- Dec 16 03:25:30.640752 ntpd[1553]: proto: precision = 0.104 usec (-23) Dec 16 03:25:30.646649 jq[1570]: true Dec 16 03:25:30.646889 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: proto: precision = 0.104 usec (-23) Dec 16 03:25:30.646889 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: basedate set to 2025-12-03 Dec 16 03:25:30.646889 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: gps base set to 2025-12-07 (week 2396) Dec 16 03:25:30.646889 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 03:25:30.646889 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 03:25:30.646889 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 03:25:30.646889 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: Listen normally on 3 eth0 10.128.0.16:123 Dec 16 03:25:30.646889 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: Listen normally on 4 lo [::1]:123 Dec 16 03:25:30.646889 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: bind(21) AF_INET6 [fe80::4001:aff:fe80:10%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 03:25:30.646889 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:10%2]:123 Dec 16 03:25:30.646889 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: cannot bind address fe80::4001:aff:fe80:10%2 Dec 16 03:25:30.646889 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: Listening on routing socket on fd #21 for interface updates Dec 16 03:25:30.644522 ntpd[1553]: basedate set to 2025-12-03 Dec 16 03:25:30.648775 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 03:25:30.644543 ntpd[1553]: gps base set to 2025-12-07 (week 2396) Dec 16 03:25:30.644714 ntpd[1553]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 03:25:30.644752 ntpd[1553]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 03:25:30.644982 ntpd[1553]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 03:25:30.645018 ntpd[1553]: Listen normally on 3 eth0 10.128.0.16:123 Dec 16 03:25:30.645056 ntpd[1553]: Listen normally on 4 lo [::1]:123 Dec 16 03:25:30.645098 ntpd[1553]: bind(21) AF_INET6 [fe80::4001:aff:fe80:10%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 03:25:30.645125 ntpd[1553]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:10%2]:123 Dec 16 03:25:30.645145 ntpd[1553]: cannot bind address fe80::4001:aff:fe80:10%2 Dec 16 03:25:30.645183 ntpd[1553]: Listening on routing socket on fd #21 for interface updates Dec 16 03:25:30.656457 extend-filesystems[1548]: Resized partition /dev/sda9 Dec 16 03:25:30.656835 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 16 03:25:30.664703 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 03:25:30.664703 ntpd[1553]: 16 Dec 03:25:30 ntpd[1553]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 03:25:30.663008 ntpd[1553]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 03:25:30.663044 ntpd[1553]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 03:25:30.670233 extend-filesystems[1589]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 03:25:30.694408 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2604027 blocks Dec 16 03:25:30.694455 update_engine[1567]: I20251216 03:25:30.670214 1567 main.cc:92] Flatcar Update Engine starting Dec 16 03:25:30.743335 jq[1590]: true Dec 16 03:25:30.744054 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 03:25:30.766221 kernel: EXT4-fs (sda9): resized filesystem to 2604027 Dec 16 03:25:30.784924 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 03:25:30.795679 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 03:25:30.830434 extend-filesystems[1589]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 16 03:25:30.830434 extend-filesystems[1589]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 16 03:25:30.830434 extend-filesystems[1589]: The filesystem on /dev/sda9 is now 2604027 (4k) blocks long. Dec 16 03:25:30.831302 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 03:25:30.873541 extend-filesystems[1548]: Resized filesystem in /dev/sda9 Dec 16 03:25:30.833398 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 03:25:30.897666 tar[1587]: linux-amd64/LICENSE Dec 16 03:25:30.897666 tar[1587]: linux-amd64/helm Dec 16 03:25:31.052692 systemd-logind[1566]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 03:25:31.052732 systemd-logind[1566]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 16 03:25:31.052774 systemd-logind[1566]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 03:25:31.053877 systemd-logind[1566]: New seat seat0. Dec 16 03:25:31.055295 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 03:25:31.062553 bash[1628]: Updated "/home/core/.ssh/authorized_keys" Dec 16 03:25:31.065038 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 03:25:31.088692 systemd[1]: Starting sshkeys.service... Dec 16 03:25:31.099798 dbus-daemon[1545]: [system] SELinux support is enabled Dec 16 03:25:31.104452 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 03:25:31.120395 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 03:25:31.120434 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
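Editor's note: the extend-filesystems step grows the root filesystem online; with the 4k block size reported by resize2fs, the block counts above work out to roughly 6.2 GiB before and 9.9 GiB after. A tiny check with the figures copied from the log:

BLOCK = 4096  # resize2fs reports 4k blocks

before = 1617920 * BLOCK / 2**30
after = 2604027 * BLOCK / 2**30
print(f"{before:.1f} GiB -> {after:.1f} GiB")   # about 6.2 GiB -> 9.9 GiB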
Dec 16 03:25:31.127858 update_engine[1567]: I20251216 03:25:31.127631 1567 update_check_scheduler.cc:74] Next update check in 3m59s Dec 16 03:25:31.127941 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 03:25:31.128274 dbus-daemon[1545]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1501 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 03:25:31.130361 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 03:25:31.130396 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 03:25:31.144699 systemd[1]: Started update-engine.service - Update Engine. Dec 16 03:25:31.148012 dbus-daemon[1545]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 03:25:31.163941 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 03:25:31.184687 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 03:25:31.207248 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 03:25:31.222272 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 03:25:31.233480 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 03:25:31.270374 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 03:25:31.281774 systemd[1]: Started sshd@0-10.128.0.16:22-147.75.109.163:55760.service - OpenSSH per-connection server daemon (147.75.109.163:55760). Dec 16 03:25:31.346723 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 03:25:31.348345 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 03:25:31.364783 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
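Editor's note: the sshd@0-10.128.0.16:22-147.75.109.163:55760.service unit above is a per-connection instance spawned via socket activation, and the local and remote endpoints appear to be encoded in the instance name. An illustrative parse of that exact name (the naming interpretation is an inference from the log, not documented here):

import re

# Per-connection unit name as it appears in the log above.
unit = "sshd@0-10.128.0.16:22-147.75.109.163:55760.service"

m = re.match(
    r"sshd@\d+-(?P<local>.+):(?P<lport>\d+)-(?P<remote>.+):(?P<rport>\d+)\.service", unit
)
if m:
    print("local ", m.group("local"), m.group("lport"))    # 10.128.0.16 22
    print("remote", m.group("remote"), m.group("rport"))   # 147.75.109.163 55760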
Dec 16 03:25:31.409507 coreos-metadata[1645]: Dec 16 03:25:31.408 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 16 03:25:31.418543 coreos-metadata[1645]: Dec 16 03:25:31.414 INFO Fetch failed with 404: resource not found Dec 16 03:25:31.418543 coreos-metadata[1645]: Dec 16 03:25:31.414 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 16 03:25:31.418543 coreos-metadata[1645]: Dec 16 03:25:31.416 INFO Fetch successful Dec 16 03:25:31.418543 coreos-metadata[1645]: Dec 16 03:25:31.416 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 16 03:25:31.418543 coreos-metadata[1645]: Dec 16 03:25:31.418 INFO Fetch failed with 404: resource not found Dec 16 03:25:31.418543 coreos-metadata[1645]: Dec 16 03:25:31.418 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 16 03:25:31.419180 coreos-metadata[1645]: Dec 16 03:25:31.419 INFO Fetch failed with 404: resource not found Dec 16 03:25:31.419180 coreos-metadata[1645]: Dec 16 03:25:31.419 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 16 03:25:31.420140 coreos-metadata[1645]: Dec 16 03:25:31.419 INFO Fetch successful Dec 16 03:25:31.449707 unknown[1645]: wrote ssh authorized keys file for user: core Dec 16 03:25:31.485710 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 03:25:31.505403 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 03:25:31.515999 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 03:25:31.526833 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 03:25:31.535813 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 16 03:25:31.541637 dbus-daemon[1545]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 03:25:31.543648 dbus-daemon[1545]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1641 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 03:25:31.569001 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 03:25:31.580324 update-ssh-keys[1660]: Updated "/home/core/.ssh/authorized_keys" Dec 16 03:25:31.581523 locksmithd[1643]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 03:25:31.581750 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 03:25:31.600642 systemd[1]: Finished sshkeys.service. Dec 16 03:25:31.604507 systemd-networkd[1501]: eth0: Gained IPv6LL Dec 16 03:25:31.617386 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 03:25:31.629395 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 03:25:31.637711 containerd[1591]: time="2025-12-16T03:25:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 03:25:31.642423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 03:25:31.644232 containerd[1591]: time="2025-12-16T03:25:31.643818685Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 03:25:31.655514 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 03:25:31.673941 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Dec 16 03:25:31.676975 containerd[1591]: time="2025-12-16T03:25:31.675370075Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.073µs" Dec 16 03:25:31.676975 containerd[1591]: time="2025-12-16T03:25:31.675416662Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 03:25:31.676975 containerd[1591]: time="2025-12-16T03:25:31.675614828Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 03:25:31.676975 containerd[1591]: time="2025-12-16T03:25:31.675643703Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 03:25:31.676975 containerd[1591]: time="2025-12-16T03:25:31.675884581Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 03:25:31.676975 containerd[1591]: time="2025-12-16T03:25:31.675914552Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:25:31.676975 containerd[1591]: time="2025-12-16T03:25:31.676014863Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:25:31.676975 containerd[1591]: time="2025-12-16T03:25:31.676034973Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:25:31.679608 containerd[1591]: time="2025-12-16T03:25:31.678519334Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:25:31.679608 containerd[1591]: time="2025-12-16T03:25:31.678556656Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:25:31.679608 containerd[1591]: time="2025-12-16T03:25:31.678581956Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:25:31.679608 containerd[1591]: time="2025-12-16T03:25:31.678601837Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:25:31.679608 containerd[1591]: time="2025-12-16T03:25:31.678916437Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:25:31.679608 containerd[1591]: time="2025-12-16T03:25:31.678942514Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 03:25:31.679608 containerd[1591]: time="2025-12-16T03:25:31.679091006Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 03:25:31.679608 containerd[1591]: time="2025-12-16T03:25:31.679453055Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 03:25:31.679608 containerd[1591]: time="2025-12-16T03:25:31.679514294Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 03:25:31.679608 containerd[1591]: time="2025-12-16T03:25:31.679534935Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 03:25:31.679608 containerd[1591]: time="2025-12-16T03:25:31.679587372Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 03:25:31.680210 containerd[1591]: time="2025-12-16T03:25:31.680020437Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 03:25:31.680210 containerd[1591]: time="2025-12-16T03:25:31.680121872Z" level=info msg="metadata content store policy set" policy=shared Dec 16 03:25:31.695311 containerd[1591]: time="2025-12-16T03:25:31.689680057Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 03:25:31.695311 containerd[1591]: time="2025-12-16T03:25:31.689929936Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:25:31.695311 containerd[1591]: time="2025-12-16T03:25:31.690068137Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:25:31.695311 containerd[1591]: time="2025-12-16T03:25:31.690094030Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 03:25:31.695311 containerd[1591]: time="2025-12-16T03:25:31.690143517Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 03:25:31.695311 containerd[1591]: time="2025-12-16T03:25:31.690170467Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 03:25:31.695311 containerd[1591]: time="2025-12-16T03:25:31.690231778Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 03:25:31.695311 containerd[1591]: time="2025-12-16T03:25:31.690254895Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 03:25:31.695311 containerd[1591]: time="2025-12-16T03:25:31.690278252Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 03:25:31.695311 containerd[1591]: time="2025-12-16T03:25:31.690301964Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 03:25:31.695311 containerd[1591]: time="2025-12-16T03:25:31.690325788Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 03:25:31.695311 containerd[1591]: time="2025-12-16T03:25:31.690347535Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 03:25:31.695311 containerd[1591]: time="2025-12-16T03:25:31.690366398Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 03:25:31.695311 containerd[1591]: 
time="2025-12-16T03:25:31.690386629Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 03:25:31.695950 containerd[1591]: time="2025-12-16T03:25:31.694321389Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 03:25:31.695950 containerd[1591]: time="2025-12-16T03:25:31.694393491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 03:25:31.695950 containerd[1591]: time="2025-12-16T03:25:31.695046506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 03:25:31.695950 containerd[1591]: time="2025-12-16T03:25:31.695085446Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 03:25:31.695950 containerd[1591]: time="2025-12-16T03:25:31.695265460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 03:25:31.698872 containerd[1591]: time="2025-12-16T03:25:31.695290901Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 03:25:31.698872 containerd[1591]: time="2025-12-16T03:25:31.696296814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 03:25:31.698872 containerd[1591]: time="2025-12-16T03:25:31.698266315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 03:25:31.698872 containerd[1591]: time="2025-12-16T03:25:31.698321096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 03:25:31.698872 containerd[1591]: time="2025-12-16T03:25:31.698345830Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 03:25:31.702188 containerd[1591]: time="2025-12-16T03:25:31.698366746Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 03:25:31.702188 containerd[1591]: time="2025-12-16T03:25:31.701526150Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 03:25:31.702188 containerd[1591]: time="2025-12-16T03:25:31.701766057Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 03:25:31.702188 containerd[1591]: time="2025-12-16T03:25:31.701801001Z" level=info msg="Start snapshots syncer" Dec 16 03:25:31.702911 containerd[1591]: time="2025-12-16T03:25:31.702464716Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 03:25:31.707411 containerd[1591]: time="2025-12-16T03:25:31.706884688Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 03:25:31.715988 containerd[1591]: time="2025-12-16T03:25:31.712810035Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 03:25:31.715988 containerd[1591]: time="2025-12-16T03:25:31.712938716Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 03:25:31.715988 containerd[1591]: time="2025-12-16T03:25:31.715976201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 03:25:31.716174 containerd[1591]: time="2025-12-16T03:25:31.716043634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 03:25:31.716174 containerd[1591]: time="2025-12-16T03:25:31.716087539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 03:25:31.716174 containerd[1591]: time="2025-12-16T03:25:31.716134142Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 03:25:31.716174 containerd[1591]: time="2025-12-16T03:25:31.716158951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 03:25:31.716441 containerd[1591]: time="2025-12-16T03:25:31.716180052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 03:25:31.716441 containerd[1591]: time="2025-12-16T03:25:31.716237704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 03:25:31.716441 containerd[1591]: time="2025-12-16T03:25:31.716260314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
03:25:31.716441 containerd[1591]: time="2025-12-16T03:25:31.716308057Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 03:25:31.716441 containerd[1591]: time="2025-12-16T03:25:31.716389182Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:25:31.716441 containerd[1591]: time="2025-12-16T03:25:31.716418191Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:25:31.716441 containerd[1591]: time="2025-12-16T03:25:31.716435802Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:25:31.716745 containerd[1591]: time="2025-12-16T03:25:31.716478422Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:25:31.716745 containerd[1591]: time="2025-12-16T03:25:31.716497404Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 03:25:31.716745 containerd[1591]: time="2025-12-16T03:25:31.716560431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 03:25:31.716745 containerd[1591]: time="2025-12-16T03:25:31.716583804Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 03:25:31.716745 containerd[1591]: time="2025-12-16T03:25:31.716606391Z" level=info msg="runtime interface created" Dec 16 03:25:31.716745 containerd[1591]: time="2025-12-16T03:25:31.716639481Z" level=info msg="created NRI interface" Dec 16 03:25:31.716745 containerd[1591]: time="2025-12-16T03:25:31.716657187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 03:25:31.716745 containerd[1591]: time="2025-12-16T03:25:31.716679748Z" level=info msg="Connect containerd service" Dec 16 03:25:31.718724 containerd[1591]: time="2025-12-16T03:25:31.717686232Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 03:25:31.726229 containerd[1591]: time="2025-12-16T03:25:31.723458633Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 03:25:31.752263 init.sh[1673]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 16 03:25:31.758284 init.sh[1673]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 16 03:25:31.759410 init.sh[1673]: + /usr/bin/google_instance_setup Dec 16 03:25:31.840089 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Dec 16 03:25:31.869386 polkitd[1666]: Started polkitd version 126 Dec 16 03:25:31.890345 polkitd[1666]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 03:25:31.891036 polkitd[1666]: Loading rules from directory /run/polkit-1/rules.d Dec 16 03:25:31.891103 polkitd[1666]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 03:25:31.892690 polkitd[1666]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 03:25:31.893030 polkitd[1666]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 03:25:31.893337 polkitd[1666]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 03:25:31.897377 polkitd[1666]: Finished loading, compiling and executing 2 rules Dec 16 03:25:31.898496 systemd[1]: Started polkit.service - Authorization Manager. Dec 16 03:25:31.899182 dbus-daemon[1545]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 03:25:31.900052 polkitd[1666]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 03:25:31.946750 systemd-hostnamed[1641]: Hostname set to (transient) Dec 16 03:25:31.949828 systemd-resolved[1256]: System hostname changed to 'ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal'. Dec 16 03:25:31.967510 sshd[1650]: Accepted publickey for core from 147.75.109.163 port 55760 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:25:31.975697 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:31.999702 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 03:25:32.012565 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 03:25:32.050545 systemd-logind[1566]: New session 1 of user core. Dec 16 03:25:32.064210 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 03:25:32.083901 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 03:25:32.142984 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:32.153327 systemd-logind[1566]: New session 2 of user core. Dec 16 03:25:32.164483 containerd[1591]: time="2025-12-16T03:25:32.163494005Z" level=info msg="Start subscribing containerd event" Dec 16 03:25:32.164483 containerd[1591]: time="2025-12-16T03:25:32.164269068Z" level=info msg="Start recovering state" Dec 16 03:25:32.166434 containerd[1591]: time="2025-12-16T03:25:32.164654229Z" level=info msg="Start event monitor" Dec 16 03:25:32.166434 containerd[1591]: time="2025-12-16T03:25:32.164695555Z" level=info msg="Start cni network conf syncer for default" Dec 16 03:25:32.171278 containerd[1591]: time="2025-12-16T03:25:32.164708416Z" level=info msg="Start streaming server" Dec 16 03:25:32.171406 containerd[1591]: time="2025-12-16T03:25:32.171383880Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 03:25:32.172140 containerd[1591]: time="2025-12-16T03:25:32.171506314Z" level=info msg="runtime interface starting up..." Dec 16 03:25:32.172140 containerd[1591]: time="2025-12-16T03:25:32.171526626Z" level=info msg="starting plugins..." 
Dec 16 03:25:32.172140 containerd[1591]: time="2025-12-16T03:25:32.171555276Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 03:25:32.172140 containerd[1591]: time="2025-12-16T03:25:32.168496521Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 03:25:32.172140 containerd[1591]: time="2025-12-16T03:25:32.171758943Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 03:25:32.172030 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 03:25:32.178003 containerd[1591]: time="2025-12-16T03:25:32.177522928Z" level=info msg="containerd successfully booted in 0.541478s" Dec 16 03:25:32.277286 tar[1587]: linux-amd64/README.md Dec 16 03:25:32.332844 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 03:25:32.495402 systemd[1708]: Queued start job for default target default.target. Dec 16 03:25:32.502428 systemd[1708]: Created slice app.slice - User Application Slice. Dec 16 03:25:32.502488 systemd[1708]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 03:25:32.502514 systemd[1708]: Reached target paths.target - Paths. Dec 16 03:25:32.502793 systemd[1708]: Reached target timers.target - Timers. Dec 16 03:25:32.507385 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 03:25:32.509561 systemd[1708]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 03:25:32.551576 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 03:25:32.551737 systemd[1708]: Reached target sockets.target - Sockets. Dec 16 03:25:32.557145 systemd[1708]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 03:25:32.557462 systemd[1708]: Reached target basic.target - Basic System. Dec 16 03:25:32.557652 systemd[1708]: Reached target default.target - Main User Target. Dec 16 03:25:32.557715 systemd[1708]: Startup finished in 378ms. Dec 16 03:25:32.557941 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 03:25:32.572677 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 03:25:32.743790 systemd[1]: Started sshd@1-10.128.0.16:22-147.75.109.163:38244.service - OpenSSH per-connection server daemon (147.75.109.163:38244). Dec 16 03:25:32.788782 instance-setup[1680]: INFO Running google_set_multiqueue. Dec 16 03:25:32.817398 instance-setup[1680]: INFO Set channels for eth0 to 2. Dec 16 03:25:32.825346 instance-setup[1680]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 16 03:25:32.827296 instance-setup[1680]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 16 03:25:32.829293 instance-setup[1680]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 16 03:25:32.830247 instance-setup[1680]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 16 03:25:32.830326 instance-setup[1680]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Dec 16 03:25:32.832321 instance-setup[1680]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 16 03:25:32.832843 instance-setup[1680]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Dec 16 03:25:32.835270 instance-setup[1680]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 16 03:25:32.844762 instance-setup[1680]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 16 03:25:32.856359 instance-setup[1680]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 16 03:25:32.858270 instance-setup[1680]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 16 03:25:32.858370 instance-setup[1680]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 16 03:25:32.881589 init.sh[1673]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 16 03:25:33.048530 startup-script[1758]: INFO Starting startup scripts. Dec 16 03:25:33.056880 startup-script[1758]: INFO No startup scripts found in metadata. Dec 16 03:25:33.056961 startup-script[1758]: INFO Finished running startup scripts. Dec 16 03:25:33.058289 sshd[1727]: Accepted publickey for core from 147.75.109.163 port 38244 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:25:33.062533 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:33.076484 systemd-logind[1566]: New session 3 of user core. Dec 16 03:25:33.084247 init.sh[1673]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 16 03:25:33.084386 init.sh[1673]: + daemon_pids=() Dec 16 03:25:33.084501 init.sh[1673]: + for d in accounts clock_skew network Dec 16 03:25:33.084873 init.sh[1673]: + daemon_pids+=($!) Dec 16 03:25:33.084918 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 03:25:33.085321 init.sh[1673]: + for d in accounts clock_skew network Dec 16 03:25:33.085472 init.sh[1762]: + /usr/bin/google_accounts_daemon Dec 16 03:25:33.086086 init.sh[1673]: + daemon_pids+=($!) Dec 16 03:25:33.086943 init.sh[1763]: + /usr/bin/google_clock_skew_daemon Dec 16 03:25:33.087964 init.sh[1673]: + for d in accounts clock_skew network Dec 16 03:25:33.087964 init.sh[1673]: + daemon_pids+=($!) Dec 16 03:25:33.087964 init.sh[1673]: + NOTIFY_SOCKET=/run/systemd/notify Dec 16 03:25:33.087964 init.sh[1673]: + /usr/bin/systemd-notify --ready Dec 16 03:25:33.088173 init.sh[1764]: + /usr/bin/google_network_daemon Dec 16 03:25:33.120605 systemd[1]: Started oem-gce.service - GCE Linux Agent. Dec 16 03:25:33.133384 init.sh[1673]: + wait -n 1762 1763 1764 Dec 16 03:25:33.211453 sshd[1766]: Connection closed by 147.75.109.163 port 38244 Dec 16 03:25:33.213344 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:33.224800 systemd[1]: sshd@1-10.128.0.16:22-147.75.109.163:38244.service: Deactivated successfully. Dec 16 03:25:33.230483 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 03:25:33.236799 systemd-logind[1566]: Session 3 logged out. Waiting for processes to exit. Dec 16 03:25:33.239048 systemd-logind[1566]: Removed session 3. Dec 16 03:25:33.274297 systemd[1]: Started sshd@2-10.128.0.16:22-147.75.109.163:38252.service - OpenSSH per-connection server daemon (147.75.109.163:38252). Dec 16 03:25:33.534337 google-clock-skew[1763]: INFO Starting Google Clock Skew daemon. Dec 16 03:25:33.559516 google-clock-skew[1763]: INFO Clock drift token has changed: 0. Dec 16 03:25:33.561029 google-networking[1764]: INFO Starting Google Networking daemon. 
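[Annotation] The google_set_multiqueue entries above record per-queue XPS assignments as CPU bitmasks written to xps_cpus (XPS=1 for tx-0, XPS=2 for tx-1). As a reading aid only, a minimal sketch (the helper name is ours, not part of the guest agent) that expands such a mask into the CPUs it selects:

    def cpus_from_xps_mask(mask: int) -> list[int]:
        """Expand an xps_cpus bitmask into the list of CPU indices it selects."""
        return [cpu for cpu in range(mask.bit_length()) if (mask >> cpu) & 1]

    # The values logged above: queue tx-0 -> CPU 0, queue tx-1 -> CPU 1.
    print(cpus_from_xps_mask(0x1))  # [0]
    print(cpus_from_xps_mask(0x2))  # [1]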
Dec 16 03:25:33.610537 sshd[1773]: Accepted publickey for core from 147.75.109.163 port 38252 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:25:33.612308 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:33.625606 systemd-logind[1566]: New session 4 of user core. Dec 16 03:25:33.628173 ntpd[1553]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:10%2]:123 Dec 16 03:25:33.629611 ntpd[1553]: 16 Dec 03:25:33 ntpd[1553]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:10%2]:123 Dec 16 03:25:33.632507 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 03:25:33.670436 groupadd[1785]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 16 03:25:33.675107 groupadd[1785]: group added to /etc/gshadow: name=google-sudoers Dec 16 03:25:33.756956 sshd[1786]: Connection closed by 147.75.109.163 port 38252 Dec 16 03:25:33.758382 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:33.766178 systemd[1]: sshd@2-10.128.0.16:22-147.75.109.163:38252.service: Deactivated successfully. Dec 16 03:25:33.769862 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 03:25:33.777124 systemd-logind[1566]: Session 4 logged out. Waiting for processes to exit. Dec 16 03:25:33.778898 systemd-logind[1566]: Removed session 4. Dec 16 03:25:33.848314 groupadd[1785]: new group: name=google-sudoers, GID=1000 Dec 16 03:25:33.905961 google-accounts[1762]: INFO Starting Google Accounts daemon. Dec 16 03:25:33.913955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:25:33.918862 google-accounts[1762]: WARNING OS Login not installed. Dec 16 03:25:33.920424 google-accounts[1762]: INFO Creating a new user account for 0. Dec 16 03:25:33.924889 init.sh[1805]: useradd: invalid user name '0': use --badname to ignore Dec 16 03:25:33.925747 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 03:25:33.926853 google-accounts[1762]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 16 03:25:33.934656 systemd[1]: Startup finished in 2.801s (kernel) + 9.447s (initrd) + 10.054s (userspace) = 22.304s. Dec 16 03:25:33.941687 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:25:34.000052 systemd-resolved[1256]: Clock change detected. Flushing caches. Dec 16 03:25:34.002629 google-clock-skew[1763]: INFO Synced system time with hardware clock. Dec 16 03:25:34.469626 kubelet[1803]: E1216 03:25:34.469485 1803 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:25:34.472610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:25:34.472890 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:25:34.473541 systemd[1]: kubelet.service: Consumed 1.297s CPU time, 269.1M memory peak. Dec 16 03:25:43.518839 systemd[1]: Started sshd@3-10.128.0.16:22-147.75.109.163:59726.service - OpenSSH per-connection server daemon (147.75.109.163:59726). 
Dec 16 03:25:43.801791 sshd[1818]: Accepted publickey for core from 147.75.109.163 port 59726 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:25:43.803589 sshd-session[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:43.810983 systemd-logind[1566]: New session 5 of user core. Dec 16 03:25:43.818137 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 03:25:43.934706 sshd[1822]: Connection closed by 147.75.109.163 port 59726 Dec 16 03:25:43.935435 sshd-session[1818]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:43.941231 systemd[1]: sshd@3-10.128.0.16:22-147.75.109.163:59726.service: Deactivated successfully. Dec 16 03:25:43.943661 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 03:25:43.945108 systemd-logind[1566]: Session 5 logged out. Waiting for processes to exit. Dec 16 03:25:43.947027 systemd-logind[1566]: Removed session 5. Dec 16 03:25:43.987083 systemd[1]: Started sshd@4-10.128.0.16:22-147.75.109.163:59740.service - OpenSSH per-connection server daemon (147.75.109.163:59740). Dec 16 03:25:44.267368 sshd[1828]: Accepted publickey for core from 147.75.109.163 port 59740 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:25:44.269275 sshd-session[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:44.276741 systemd-logind[1566]: New session 6 of user core. Dec 16 03:25:44.283168 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 03:25:44.394052 sshd[1832]: Connection closed by 147.75.109.163 port 59740 Dec 16 03:25:44.394830 sshd-session[1828]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:44.401074 systemd[1]: sshd@4-10.128.0.16:22-147.75.109.163:59740.service: Deactivated successfully. Dec 16 03:25:44.403422 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 03:25:44.404715 systemd-logind[1566]: Session 6 logged out. Waiting for processes to exit. Dec 16 03:25:44.406777 systemd-logind[1566]: Removed session 6. Dec 16 03:25:44.451785 systemd[1]: Started sshd@5-10.128.0.16:22-147.75.109.163:59748.service - OpenSSH per-connection server daemon (147.75.109.163:59748). Dec 16 03:25:44.483869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 03:25:44.489202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:25:44.755325 sshd[1838]: Accepted publickey for core from 147.75.109.163 port 59748 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:25:44.757082 sshd-session[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:44.763747 systemd-logind[1566]: New session 7 of user core. Dec 16 03:25:44.775556 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 03:25:44.844207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:25:44.864370 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:25:44.892638 sshd[1845]: Connection closed by 147.75.109.163 port 59748 Dec 16 03:25:44.894215 sshd-session[1838]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:44.903696 systemd[1]: sshd@5-10.128.0.16:22-147.75.109.163:59748.service: Deactivated successfully. Dec 16 03:25:44.904457 systemd-logind[1566]: Session 7 logged out. Waiting for processes to exit. 
Dec 16 03:25:44.909419 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 03:25:44.914720 systemd-logind[1566]: Removed session 7. Dec 16 03:25:44.920596 kubelet[1852]: E1216 03:25:44.920546 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:25:44.925144 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:25:44.925373 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:25:44.925935 systemd[1]: kubelet.service: Consumed 202ms CPU time, 107.9M memory peak. Dec 16 03:25:44.950816 systemd[1]: Started sshd@6-10.128.0.16:22-147.75.109.163:59762.service - OpenSSH per-connection server daemon (147.75.109.163:59762). Dec 16 03:25:45.238312 sshd[1863]: Accepted publickey for core from 147.75.109.163 port 59762 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:25:45.239535 sshd-session[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:45.246978 systemd-logind[1566]: New session 8 of user core. Dec 16 03:25:45.254152 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 03:25:45.352114 sudo[1868]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 03:25:45.352644 sudo[1868]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:25:45.366137 sudo[1868]: pam_unix(sudo:session): session closed for user root Dec 16 03:25:45.409074 sshd[1867]: Connection closed by 147.75.109.163 port 59762 Dec 16 03:25:45.410055 sshd-session[1863]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:45.415719 systemd[1]: sshd@6-10.128.0.16:22-147.75.109.163:59762.service: Deactivated successfully. Dec 16 03:25:45.418108 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 03:25:45.420732 systemd-logind[1566]: Session 8 logged out. Waiting for processes to exit. Dec 16 03:25:45.422379 systemd-logind[1566]: Removed session 8. Dec 16 03:25:45.463955 systemd[1]: Started sshd@7-10.128.0.16:22-147.75.109.163:59778.service - OpenSSH per-connection server daemon (147.75.109.163:59778). Dec 16 03:25:45.737443 sshd[1875]: Accepted publickey for core from 147.75.109.163 port 59778 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:25:45.739429 sshd-session[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:45.746981 systemd-logind[1566]: New session 9 of user core. Dec 16 03:25:45.756147 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 03:25:45.838582 sudo[1881]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 03:25:45.839133 sudo[1881]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:25:45.842849 sudo[1881]: pam_unix(sudo:session): session closed for user root Dec 16 03:25:45.857344 sudo[1880]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 03:25:45.857838 sudo[1880]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:25:45.867941 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Dec 16 03:25:45.925592 kernel: kauditd_printk_skb: 60 callbacks suppressed Dec 16 03:25:45.925716 kernel: audit: type=1305 audit(1765855545.919:215): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 03:25:45.919000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 03:25:45.925863 augenrules[1905]: No rules Dec 16 03:25:45.927081 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 03:25:45.927438 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 03:25:45.930178 sudo[1880]: pam_unix(sudo:session): session closed for user root Dec 16 03:25:45.919000 audit[1905]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff774c8f80 a2=420 a3=0 items=0 ppid=1886 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:45.970581 kernel: audit: type=1300 audit(1765855545.919:215): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff774c8f80 a2=420 a3=0 items=0 ppid=1886 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:45.970661 kernel: audit: type=1327 audit(1765855545.919:215): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:25:45.919000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:25:45.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:45.994010 systemd[1]: sshd@7-10.128.0.16:22-147.75.109.163:59778.service: Deactivated successfully. Dec 16 03:25:45.986157 sshd-session[1875]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:46.004671 sshd[1879]: Connection closed by 147.75.109.163 port 59778 Dec 16 03:25:45.997308 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 03:25:46.005726 kernel: audit: type=1130 audit(1765855545.928:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:46.001349 systemd-logind[1566]: Session 9 logged out. Waiting for processes to exit. Dec 16 03:25:46.006572 kernel: audit: type=1131 audit(1765855545.928:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:45.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:46.003247 systemd-logind[1566]: Removed session 9. Dec 16 03:25:46.028957 kernel: audit: type=1106 audit(1765855545.929:218): pid=1880 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:45.929000 audit[1880]: USER_END pid=1880 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:25:46.052241 kernel: audit: type=1104 audit(1765855545.929:219): pid=1880 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:25:45.929000 audit[1880]: CRED_DISP pid=1880 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:25:45.989000 audit[1875]: USER_END pid=1875 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:25:46.109456 kernel: audit: type=1106 audit(1765855545.989:220): pid=1875 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:25:46.126440 kernel: audit: type=1104 audit(1765855545.989:221): pid=1875 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:25:45.989000 audit[1875]: CRED_DISP pid=1875 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:25:46.120352 systemd[1]: Started sshd@8-10.128.0.16:22-147.75.109.163:59782.service - OpenSSH per-connection server daemon (147.75.109.163:59782). Dec 16 03:25:45.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.16:22-147.75.109.163:59778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:46.159376 kernel: audit: type=1131 audit(1765855545.993:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.16:22-147.75.109.163:59778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:46.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.16:22-147.75.109.163:59782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:46.417000 audit[1915]: USER_ACCT pid=1915 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:25:46.419207 sshd[1915]: Accepted publickey for core from 147.75.109.163 port 59782 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:25:46.419000 audit[1915]: CRED_ACQ pid=1915 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:25:46.419000 audit[1915]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6a94def0 a2=3 a3=0 items=0 ppid=1 pid=1915 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:46.419000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:46.421319 sshd-session[1915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:46.427990 systemd-logind[1566]: New session 10 of user core. Dec 16 03:25:46.435148 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 03:25:46.439000 audit[1915]: USER_START pid=1915 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:25:46.441000 audit[1919]: CRED_ACQ pid=1919 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:25:46.518000 audit[1920]: USER_ACCT pid=1920 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:25:46.519646 sudo[1920]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 03:25:46.518000 audit[1920]: CRED_REFR pid=1920 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:25:46.520212 sudo[1920]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:25:46.519000 audit[1920]: USER_START pid=1920 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:25:47.040228 systemd[1]: Starting docker.service - Docker Application Container Engine... 
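[Annotation] The audit records interleaved above, and the NETFILTER_CFG entries that follow as dockerd programs its DOCKER/DOCKER-FORWARD/DOCKER-USER chains, carry the invoking command line as a hex-encoded PROCTITLE field with NUL-separated arguments. A small decoding sketch (the helper name is ours, not part of auditd or any tool shown here):

    def decode_proctitle(hex_argv: str) -> str:
        """Turn an audit PROCTITLE hex string (NUL-separated argv) into a readable command line."""
        return bytes.fromhex(hex_argv).decode("utf-8", errors="replace").replace("\x00", " ")

    # The first NETFILTER_CFG record below decodes to the call that creates the
    # DOCKER chain in the nat table:
    print(decode_proctitle(
        "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
    ))  # -> /usr/bin/iptables --wait -t nat -N DOCKER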
Dec 16 03:25:47.056414 (dockerd)[1939]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 03:25:47.438729 dockerd[1939]: time="2025-12-16T03:25:47.438078634Z" level=info msg="Starting up" Dec 16 03:25:47.440215 dockerd[1939]: time="2025-12-16T03:25:47.440181819Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 03:25:47.455622 dockerd[1939]: time="2025-12-16T03:25:47.455576807Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 03:25:47.504624 dockerd[1939]: time="2025-12-16T03:25:47.504148969Z" level=info msg="Loading containers: start." Dec 16 03:25:47.522942 kernel: Initializing XFRM netlink socket Dec 16 03:25:47.610000 audit[1986]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.610000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe80f438d0 a2=0 a3=0 items=0 ppid=1939 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.610000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 03:25:47.612000 audit[1988]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.612000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcc5c9f5f0 a2=0 a3=0 items=0 ppid=1939 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.612000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 03:25:47.616000 audit[1990]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.616000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff316a0320 a2=0 a3=0 items=0 ppid=1939 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.616000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 03:25:47.619000 audit[1992]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.619000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff293daba0 a2=0 a3=0 items=0 ppid=1939 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.619000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 03:25:47.621000 audit[1994]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.621000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc069f0690 a2=0 a3=0 items=0 ppid=1939 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.621000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 03:25:47.624000 audit[1996]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.624000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc53478ad0 a2=0 a3=0 items=0 ppid=1939 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.624000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:25:47.627000 audit[1998]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.627000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffde8181140 a2=0 a3=0 items=0 ppid=1939 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.627000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:25:47.631000 audit[2000]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.631000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffcbc69b1e0 a2=0 a3=0 items=0 ppid=1939 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.631000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 03:25:47.668000 audit[2003]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.668000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffef9cad560 a2=0 a3=0 items=0 ppid=1939 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.668000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 03:25:47.672000 audit[2005]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2005 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.672000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd20ccd1a0 a2=0 a3=0 items=0 ppid=1939 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.672000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:25:47.675000 audit[2007]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.675000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffdff374f90 a2=0 a3=0 items=0 ppid=1939 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.675000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 03:25:47.678000 audit[2009]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.678000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc76fc72e0 a2=0 a3=0 items=0 ppid=1939 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.678000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:25:47.681000 audit[2011]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.681000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffddc1605f0 a2=0 a3=0 items=0 ppid=1939 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.681000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:25:47.737000 audit[2041]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:25:47.737000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffb9f496f0 a2=0 a3=0 items=0 ppid=1939 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.737000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 03:25:47.741000 audit[2043]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:25:47.741000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff7f364200 a2=0 a3=0 items=0 ppid=1939 pid=2043 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.741000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 03:25:47.743000 audit[2045]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2045 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:25:47.743000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7676aec0 a2=0 a3=0 items=0 ppid=1939 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.743000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 03:25:47.746000 audit[2047]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:25:47.746000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4fd2a1c0 a2=0 a3=0 items=0 ppid=1939 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.746000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 03:25:47.749000 audit[2049]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:25:47.749000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe3113bb00 a2=0 a3=0 items=0 ppid=1939 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.749000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 03:25:47.752000 audit[2051]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2051 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:25:47.752000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeff0724d0 a2=0 a3=0 items=0 ppid=1939 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.752000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:25:47.755000 audit[2053]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2053 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:25:47.755000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcc4ffd180 a2=0 a3=0 items=0 ppid=1939 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.755000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:25:47.759000 audit[2055]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2055 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:25:47.759000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffca2ab22c0 a2=0 a3=0 items=0 ppid=1939 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.759000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 03:25:47.763000 audit[2057]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2057 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:25:47.763000 audit[2057]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc8e013950 a2=0 a3=0 items=0 ppid=1939 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.763000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 03:25:47.765000 audit[2059]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2059 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:25:47.765000 audit[2059]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd1463a2e0 a2=0 a3=0 items=0 ppid=1939 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.765000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:25:47.769000 audit[2061]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2061 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:25:47.769000 audit[2061]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffc9b3a560 a2=0 a3=0 items=0 ppid=1939 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.769000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 03:25:47.772000 audit[2063]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2063 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:25:47.772000 audit[2063]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe628fc1c0 a2=0 a3=0 items=0 ppid=1939 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.772000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:25:47.775000 audit[2065]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2065 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:25:47.775000 audit[2065]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff3eebb080 a2=0 a3=0 items=0 ppid=1939 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.775000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:25:47.783000 audit[2070]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2070 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.783000 audit[2070]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc71944050 a2=0 a3=0 items=0 ppid=1939 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.783000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:25:47.787000 audit[2072]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.787000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc717efca0 a2=0 a3=0 items=0 ppid=1939 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.787000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:25:47.790000 audit[2074]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.790000 audit[2074]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffcfac17b0 a2=0 a3=0 items=0 ppid=1939 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.790000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:25:47.793000 audit[2076]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:25:47.793000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdb5411350 a2=0 a3=0 items=0 ppid=1939 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.793000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:25:47.796000 audit[2078]: NETFILTER_CFG table=filter:32 family=10 entries=1 
op=nft_register_rule pid=2078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:25:47.796000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc26810f60 a2=0 a3=0 items=0 ppid=1939 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.796000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:25:47.799000 audit[2080]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:25:47.799000 audit[2080]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffef0c72060 a2=0 a3=0 items=0 ppid=1939 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.799000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:25:47.824000 audit[2085]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.824000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffda65360e0 a2=0 a3=0 items=0 ppid=1939 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.824000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 03:25:47.828000 audit[2087]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.828000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff34d35830 a2=0 a3=0 items=0 ppid=1939 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.828000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 03:25:47.841000 audit[2095]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.841000 audit[2095]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc37a87ce0 a2=0 a3=0 items=0 ppid=1939 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.841000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 03:25:47.853000 audit[2101]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.853000 
audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffb9730130 a2=0 a3=0 items=0 ppid=1939 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.853000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 03:25:47.857000 audit[2103]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.857000 audit[2103]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffed0975820 a2=0 a3=0 items=0 ppid=1939 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.857000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 03:25:47.860000 audit[2105]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.860000 audit[2105]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe56e44870 a2=0 a3=0 items=0 ppid=1939 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.860000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 03:25:47.863000 audit[2107]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.863000 audit[2107]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffdcc2ef740 a2=0 a3=0 items=0 ppid=1939 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.863000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:25:47.866000 audit[2109]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:25:47.866000 audit[2109]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd4ddd0400 a2=0 a3=0 items=0 ppid=1939 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.866000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 03:25:47.867777 
systemd-networkd[1501]: docker0: Link UP Dec 16 03:25:47.873226 dockerd[1939]: time="2025-12-16T03:25:47.873178753Z" level=info msg="Loading containers: done." Dec 16 03:25:47.895720 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3368413080-merged.mount: Deactivated successfully. Dec 16 03:25:47.900336 dockerd[1939]: time="2025-12-16T03:25:47.900280919Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 03:25:47.900509 dockerd[1939]: time="2025-12-16T03:25:47.900369508Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 03:25:47.900509 dockerd[1939]: time="2025-12-16T03:25:47.900474853Z" level=info msg="Initializing buildkit" Dec 16 03:25:47.930187 dockerd[1939]: time="2025-12-16T03:25:47.930152427Z" level=info msg="Completed buildkit initialization" Dec 16 03:25:47.938624 dockerd[1939]: time="2025-12-16T03:25:47.938576577Z" level=info msg="Daemon has completed initialization" Dec 16 03:25:47.938862 dockerd[1939]: time="2025-12-16T03:25:47.938774061Z" level=info msg="API listen on /run/docker.sock" Dec 16 03:25:47.938999 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 03:25:47.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:48.858706 containerd[1591]: time="2025-12-16T03:25:48.858644986Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 03:25:49.213798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount936086937.mount: Deactivated successfully. 
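[annotation] The audit records above log each Docker-generated iptables/ip6tables invocation as a NETFILTER_CFG/SYSCALL pair plus a PROCTITLE record whose payload is the process argv, hex-encoded with NUL separators. A minimal sketch (Python; the hex string is copied verbatim from the DOCKER-ISOLATION-STAGE-2 record above) to recover the readable command line:

    # Decode an audit PROCTITLE payload: argv joined by NUL bytes, then hex-encoded.
    hex_payload = ("2F7573722F62696E2F6970367461626C6573002D2D77616974002D7400"
                   "66696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32")
    argv = bytes.fromhex(hex_payload).split(b"\x00")
    print(" ".join(part.decode() for part in argv))
    # Expected output: /usr/bin/ip6tables --wait -t filter -N DOCKER-ISOLATION-STAGE-2

The same decoding applies to every PROCTITLE line in this section; only the hex payload differs.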
Dec 16 03:25:50.747447 containerd[1591]: time="2025-12-16T03:25:50.747379644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:25:50.748984 containerd[1591]: time="2025-12-16T03:25:50.748938150Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Dec 16 03:25:50.751946 containerd[1591]: time="2025-12-16T03:25:50.750119967Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:25:50.754326 containerd[1591]: time="2025-12-16T03:25:50.754266462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:25:50.755940 containerd[1591]: time="2025-12-16T03:25:50.755655778Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.896959224s" Dec 16 03:25:50.755940 containerd[1591]: time="2025-12-16T03:25:50.755703407Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 16 03:25:50.756543 containerd[1591]: time="2025-12-16T03:25:50.756498901Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 03:25:52.376989 containerd[1591]: time="2025-12-16T03:25:52.376903513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:25:52.378379 containerd[1591]: time="2025-12-16T03:25:52.378172091Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Dec 16 03:25:52.379338 containerd[1591]: time="2025-12-16T03:25:52.379296364Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:25:52.382304 containerd[1591]: time="2025-12-16T03:25:52.382268355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:25:52.384715 containerd[1591]: time="2025-12-16T03:25:52.383853611Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.627313724s" Dec 16 03:25:52.384715 containerd[1591]: time="2025-12-16T03:25:52.383896364Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 16 03:25:52.384715 
containerd[1591]: time="2025-12-16T03:25:52.384384673Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 03:25:53.733029 containerd[1591]: time="2025-12-16T03:25:53.732965210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:25:53.734445 containerd[1591]: time="2025-12-16T03:25:53.734323356Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=0" Dec 16 03:25:53.735355 containerd[1591]: time="2025-12-16T03:25:53.735309183Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:25:53.738609 containerd[1591]: time="2025-12-16T03:25:53.738537211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:25:53.740656 containerd[1591]: time="2025-12-16T03:25:53.739770319Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.355351157s" Dec 16 03:25:53.740656 containerd[1591]: time="2025-12-16T03:25:53.739813992Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 16 03:25:53.740958 containerd[1591]: time="2025-12-16T03:25:53.740926826Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 03:25:54.822872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1915833221.mount: Deactivated successfully. Dec 16 03:25:55.109148 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 03:25:55.115205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:25:55.435181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:25:55.444082 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 03:25:55.444194 kernel: audit: type=1130 audit(1765855555.434:273): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:55.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:55.471426 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:25:55.573742 kubelet[2232]: E1216 03:25:55.573673 2232 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:25:55.577664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:25:55.577985 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:25:55.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:25:55.579808 systemd[1]: kubelet.service: Consumed 247ms CPU time, 108.7M memory peak. Dec 16 03:25:55.602057 kernel: audit: type=1131 audit(1765855555.578:274): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:25:55.749791 containerd[1591]: time="2025-12-16T03:25:55.749436508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:25:55.751368 containerd[1591]: time="2025-12-16T03:25:55.751141801Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=0" Dec 16 03:25:55.752260 containerd[1591]: time="2025-12-16T03:25:55.752219528Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:25:55.754707 containerd[1591]: time="2025-12-16T03:25:55.754670138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:25:55.755864 containerd[1591]: time="2025-12-16T03:25:55.755416014Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.01430808s" Dec 16 03:25:55.755864 containerd[1591]: time="2025-12-16T03:25:55.755460993Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 16 03:25:55.756063 containerd[1591]: time="2025-12-16T03:25:55.756045875Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 03:25:56.064517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3267240735.mount: Deactivated successfully. 
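[annotation] The kubelet failure above (run.go:72, missing /var/lib/kubelet/config.yaml) is emitted in klog's text format: a severity letter (I/W/E/F), MMDD, wall-clock time, a thread id, and source file:line, followed by the message. A minimal sketch, assuming that layout, for pulling those fields out of such a line; the regex and field names are illustrative, not part of kubelet itself:

    import re

    # klog header: <severity>MMDD hh:mm:ss.uuuuuu <thread-id> <file>:<line>] <message>
    KLOG = re.compile(r'(?P<sev>[IWEF])(?P<md>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
                      r'(?P<tid>\d+) (?P<src>[\w./-]+:\d+)\] (?P<msg>.*)')

    line = ('E1216 03:25:55.573673 2232 run.go:72] "command failed" '
            'err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, ..."')
    m = KLOG.match(line)
    if m:
        print(m.group("sev"), m.group("src"), "->", m.group("msg"))
    # Prints: E run.go:72 -> "command failed" err="failed to load kubelet config file, ..."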
Dec 16 03:25:57.172298 containerd[1591]: time="2025-12-16T03:25:57.172219572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:25:57.173895 containerd[1591]: time="2025-12-16T03:25:57.173580328Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20580977" Dec 16 03:25:57.175032 containerd[1591]: time="2025-12-16T03:25:57.174987315Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:25:57.178329 containerd[1591]: time="2025-12-16T03:25:57.178286817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:25:57.179778 containerd[1591]: time="2025-12-16T03:25:57.179738053Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.423654114s" Dec 16 03:25:57.179945 containerd[1591]: time="2025-12-16T03:25:57.179901668Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 16 03:25:57.181082 containerd[1591]: time="2025-12-16T03:25:57.181027296Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 03:25:57.547508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1479694465.mount: Deactivated successfully. 
Dec 16 03:25:57.553722 containerd[1591]: time="2025-12-16T03:25:57.553666971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:25:57.554848 containerd[1591]: time="2025-12-16T03:25:57.554560156Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 03:25:57.555936 containerd[1591]: time="2025-12-16T03:25:57.555881152Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:25:57.559554 containerd[1591]: time="2025-12-16T03:25:57.558478937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:25:57.559554 containerd[1591]: time="2025-12-16T03:25:57.559413407Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 378.228761ms" Dec 16 03:25:57.559554 containerd[1591]: time="2025-12-16T03:25:57.559448092Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 03:25:57.560376 containerd[1591]: time="2025-12-16T03:25:57.560320966Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 03:25:57.917348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2482884454.mount: Deactivated successfully. 
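[annotation] Each containerd pull above ends with a "Pulled image ... size ... in ..." summary. A minimal sketch (Python; the regex is an assumption derived from the records above, shown with the backslash-escaping of the journal removed) to tabulate image reference, reported size, and pull duration:

    import re

    # Summary shape: Pulled image "<ref>" with image id "...", ..., size "<bytes>" in <duration>
    PULLED = re.compile(r'Pulled image "(?P<ref>[^"]+)".*size "(?P<size>\d+)" in (?P<dur>[\d.]+(?:ms|s))')

    msg = ('Pulled image "registry.k8s.io/pause:3.10" with image id "sha256:873ed751...", '
           'repo tag "registry.k8s.io/pause:3.10", repo digest "registry.k8s.io/pause@sha256:ee6521f2...", '
           'size "320368" in 378.228761ms')
    m = PULLED.search(msg)
    if m:
        print(m.group("ref"), int(m.group("size")), "bytes,", m.group("dur"))
    # -> registry.k8s.io/pause:3.10 320368 bytes, 378.228761ms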
Dec 16 03:26:00.207686 containerd[1591]: time="2025-12-16T03:26:00.207617692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:00.209310 containerd[1591]: time="2025-12-16T03:26:00.209178199Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=56977083" Dec 16 03:26:00.210353 containerd[1591]: time="2025-12-16T03:26:00.210306613Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:00.214029 containerd[1591]: time="2025-12-16T03:26:00.213990549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:00.215807 containerd[1591]: time="2025-12-16T03:26:00.215522159Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.655160653s" Dec 16 03:26:00.215807 containerd[1591]: time="2025-12-16T03:26:00.215566433Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 16 03:26:01.672392 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 16 03:26:01.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:26:01.696935 kernel: audit: type=1131 audit(1765855561.672:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:26:01.703000 audit: BPF prog-id=62 op=UNLOAD Dec 16 03:26:01.711960 kernel: audit: type=1334 audit(1765855561.703:276): prog-id=62 op=UNLOAD Dec 16 03:26:04.292387 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:26:04.292826 systemd[1]: kubelet.service: Consumed 247ms CPU time, 108.7M memory peak. Dec 16 03:26:04.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:26:04.305655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:26:04.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:26:04.336684 kernel: audit: type=1130 audit(1765855564.291:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:26:04.336792 kernel: audit: type=1131 audit(1765855564.291:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:26:04.367101 systemd[1]: Reload requested from client PID 2383 ('systemctl') (unit session-10.scope)... Dec 16 03:26:04.367121 systemd[1]: Reloading... Dec 16 03:26:04.529943 zram_generator::config[2427]: No configuration found. Dec 16 03:26:04.912905 systemd[1]: Reloading finished in 545 ms. Dec 16 03:26:04.938000 audit: BPF prog-id=66 op=LOAD Dec 16 03:26:04.946941 kernel: audit: type=1334 audit(1765855564.938:279): prog-id=66 op=LOAD Dec 16 03:26:04.945000 audit: BPF prog-id=52 op=UNLOAD Dec 16 03:26:04.961672 kernel: audit: type=1334 audit(1765855564.945:280): prog-id=52 op=UNLOAD Dec 16 03:26:04.946000 audit: BPF prog-id=67 op=LOAD Dec 16 03:26:04.946000 audit: BPF prog-id=68 op=LOAD Dec 16 03:26:04.976019 kernel: audit: type=1334 audit(1765855564.946:281): prog-id=67 op=LOAD Dec 16 03:26:04.976098 kernel: audit: type=1334 audit(1765855564.946:282): prog-id=68 op=LOAD Dec 16 03:26:04.976140 kernel: audit: type=1334 audit(1765855564.946:283): prog-id=53 op=UNLOAD Dec 16 03:26:04.946000 audit: BPF prog-id=53 op=UNLOAD Dec 16 03:26:04.983065 kernel: audit: type=1334 audit(1765855564.946:284): prog-id=54 op=UNLOAD Dec 16 03:26:04.946000 audit: BPF prog-id=54 op=UNLOAD Dec 16 03:26:04.952000 audit: BPF prog-id=69 op=LOAD Dec 16 03:26:04.952000 audit: BPF prog-id=51 op=UNLOAD Dec 16 03:26:04.952000 audit: BPF prog-id=70 op=LOAD Dec 16 03:26:04.952000 audit: BPF prog-id=71 op=LOAD Dec 16 03:26:04.952000 audit: BPF prog-id=49 op=UNLOAD Dec 16 03:26:04.952000 audit: BPF prog-id=50 op=UNLOAD Dec 16 03:26:04.957000 audit: BPF prog-id=72 op=LOAD Dec 16 03:26:04.957000 audit: BPF prog-id=58 op=UNLOAD Dec 16 03:26:04.993000 audit: BPF prog-id=73 op=LOAD Dec 16 03:26:04.993000 audit: BPF prog-id=55 op=UNLOAD Dec 16 03:26:04.993000 audit: BPF prog-id=74 op=LOAD Dec 16 03:26:04.993000 audit: BPF prog-id=75 op=LOAD Dec 16 03:26:04.993000 audit: BPF prog-id=56 op=UNLOAD Dec 16 03:26:04.993000 audit: BPF prog-id=57 op=UNLOAD Dec 16 03:26:04.994000 audit: BPF prog-id=76 op=LOAD Dec 16 03:26:04.994000 audit: BPF prog-id=43 op=UNLOAD Dec 16 03:26:04.994000 audit: BPF prog-id=77 op=LOAD Dec 16 03:26:04.994000 audit: BPF prog-id=78 op=LOAD Dec 16 03:26:04.994000 audit: BPF prog-id=44 op=UNLOAD Dec 16 03:26:04.994000 audit: BPF prog-id=45 op=UNLOAD Dec 16 03:26:04.996000 audit: BPF prog-id=79 op=LOAD Dec 16 03:26:04.996000 audit: BPF prog-id=59 op=UNLOAD Dec 16 03:26:04.996000 audit: BPF prog-id=80 op=LOAD Dec 16 03:26:04.996000 audit: BPF prog-id=81 op=LOAD Dec 16 03:26:04.996000 audit: BPF prog-id=60 op=UNLOAD Dec 16 03:26:04.996000 audit: BPF prog-id=61 op=UNLOAD Dec 16 03:26:04.997000 audit: BPF prog-id=82 op=LOAD Dec 16 03:26:04.997000 audit: BPF prog-id=65 op=UNLOAD Dec 16 03:26:04.999000 audit: BPF prog-id=83 op=LOAD Dec 16 03:26:04.999000 audit: BPF prog-id=46 op=UNLOAD Dec 16 03:26:04.999000 audit: BPF prog-id=84 op=LOAD Dec 16 03:26:04.999000 audit: BPF prog-id=85 op=LOAD Dec 16 03:26:04.999000 audit: BPF prog-id=47 op=UNLOAD Dec 16 03:26:04.999000 audit: BPF prog-id=48 op=UNLOAD Dec 16 03:26:05.078636 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 03:26:05.078789 systemd[1]: kubelet.service: Failed with result 'signal'. 
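[annotation] The kernel audit lines above are tagged audit(<epoch-seconds>.<millis>:<serial>). A minimal sketch converting one of those epochs (1765855564.938, from the type=1334 prog-id=66 record above) back to UTC, which lines up with the "Dec 16 03:26:04.938" journal prefix on the same record:

    from datetime import datetime, timezone

    ts = 1765855564.938  # epoch seconds taken from audit(1765855564.938:279)
    print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
    # -> 2025-12-16T03:26:04.938000+00:00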
Dec 16 03:26:05.079263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:26:05.079344 systemd[1]: kubelet.service: Consumed 163ms CPU time, 98.5M memory peak. Dec 16 03:26:05.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:26:05.084406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:26:05.713646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:26:05.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:26:05.728465 (kubelet)[2480]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:26:05.784761 kubelet[2480]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:26:05.784761 kubelet[2480]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 03:26:05.784761 kubelet[2480]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:26:05.785382 kubelet[2480]: I1216 03:26:05.784865 2480 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:26:06.190576 kubelet[2480]: I1216 03:26:06.190510 2480 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 03:26:06.190576 kubelet[2480]: I1216 03:26:06.190552 2480 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:26:06.191061 kubelet[2480]: I1216 03:26:06.191020 2480 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 03:26:06.237753 kubelet[2480]: E1216 03:26:06.237690 2480 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 03:26:06.238140 kubelet[2480]: I1216 03:26:06.238113 2480 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:26:06.250249 kubelet[2480]: I1216 03:26:06.250208 2480 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:26:06.255216 kubelet[2480]: I1216 03:26:06.255164 2480 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 03:26:06.255728 kubelet[2480]: I1216 03:26:06.255645 2480 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:26:06.255968 kubelet[2480]: I1216 03:26:06.255706 2480 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:26:06.255968 kubelet[2480]: I1216 03:26:06.255963 2480 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 03:26:06.256230 kubelet[2480]: I1216 03:26:06.255981 2480 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 03:26:06.257458 kubelet[2480]: I1216 03:26:06.257411 2480 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:26:06.260888 kubelet[2480]: I1216 03:26:06.260720 2480 kubelet.go:480] "Attempting to sync node with API server" Dec 16 03:26:06.260888 kubelet[2480]: I1216 03:26:06.260751 2480 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:26:06.260888 kubelet[2480]: I1216 03:26:06.260786 2480 kubelet.go:386] "Adding apiserver pod source" Dec 16 03:26:06.260888 kubelet[2480]: I1216 03:26:06.260808 2480 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:26:06.269139 kubelet[2480]: E1216 03:26:06.268166 2480 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 03:26:06.269139 kubelet[2480]: E1216 03:26:06.268746 2480 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.16:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 03:26:06.269495 kubelet[2480]: I1216 03:26:06.269454 2480 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:26:06.270274 kubelet[2480]: I1216 03:26:06.270118 2480 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 03:26:06.271739 kubelet[2480]: W1216 03:26:06.271690 2480 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 03:26:06.288398 kubelet[2480]: I1216 03:26:06.288348 2480 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:26:06.288483 kubelet[2480]: I1216 03:26:06.288436 2480 server.go:1289] "Started kubelet" Dec 16 03:26:06.291675 kubelet[2480]: I1216 03:26:06.291463 2480 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:26:06.297827 kubelet[2480]: E1216 03:26:06.295455 2480 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal.188194481943c9d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal,UID:ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal,},FirstTimestamp:2025-12-16 03:26:06.288374224 +0000 UTC m=+0.553607987,LastTimestamp:2025-12-16 03:26:06.288374224 +0000 UTC m=+0.553607987,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal,}" Dec 16 03:26:06.298486 kubelet[2480]: I1216 03:26:06.298214 2480 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:26:06.299522 kubelet[2480]: I1216 03:26:06.299477 2480 server.go:317] "Adding debug handlers to kubelet server" Dec 16 03:26:06.302021 kubelet[2480]: I1216 03:26:06.301995 2480 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:26:06.303604 kubelet[2480]: E1216 03:26:06.302310 2480 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" not found" Dec 16 03:26:06.303604 kubelet[2480]: I1216 03:26:06.302669 2480 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:26:06.303604 kubelet[2480]: I1216 03:26:06.303252 2480 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:26:06.304000 audit[2495]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:06.304000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd0be8c4f0 a2=0 a3=0 items=0 ppid=2480 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:26:06.304000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:26:06.306328 kubelet[2480]: I1216 03:26:06.305112 2480 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:26:06.306328 kubelet[2480]: I1216 03:26:06.305388 2480 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:26:06.306328 kubelet[2480]: I1216 03:26:06.305659 2480 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:26:06.306000 audit[2496]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:06.306000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff22418b50 a2=0 a3=0 items=0 ppid=2480 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.306000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:26:06.310000 audit[2498]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:06.310000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdbb75e320 a2=0 a3=0 items=0 ppid=2480 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.310000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:26:06.311388 kubelet[2480]: E1216 03:26:06.311082 2480 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 03:26:06.311978 kubelet[2480]: E1216 03:26:06.311846 2480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.16:6443: connect: connection refused" interval="200ms" Dec 16 03:26:06.312597 kubelet[2480]: I1216 03:26:06.312567 2480 factory.go:223] Registration of the systemd container factory successfully Dec 16 03:26:06.312932 kubelet[2480]: I1216 03:26:06.312885 2480 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:26:06.314747 kubelet[2480]: E1216 03:26:06.314391 2480 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:26:06.313000 audit[2500]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:06.313000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff97970a70 a2=0 a3=0 items=0 ppid=2480 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.313000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:26:06.316263 kubelet[2480]: I1216 03:26:06.316242 2480 factory.go:223] Registration of the containerd container factory successfully Dec 16 03:26:06.332000 audit[2504]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:06.332000 audit[2504]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe8eaf43e0 a2=0 a3=0 items=0 ppid=2480 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.332000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 03:26:06.334084 kubelet[2480]: I1216 03:26:06.334039 2480 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 03:26:06.334000 audit[2505]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:06.334000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd483403a0 a2=0 a3=0 items=0 ppid=2480 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.334000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:26:06.336936 kubelet[2480]: I1216 03:26:06.336888 2480 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 03:26:06.336936 kubelet[2480]: I1216 03:26:06.336928 2480 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 03:26:06.337078 kubelet[2480]: I1216 03:26:06.336951 2480 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 03:26:06.337078 kubelet[2480]: I1216 03:26:06.336961 2480 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 03:26:06.337078 kubelet[2480]: E1216 03:26:06.337020 2480 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:26:06.338000 audit[2507]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:06.338000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdcc646bf0 a2=0 a3=0 items=0 ppid=2480 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.338000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:26:06.341000 audit[2509]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:06.341000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff743d06b0 a2=0 a3=0 items=0 ppid=2480 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.341000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:26:06.343000 audit[2510]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:06.343000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbb4ac350 a2=0 a3=0 items=0 ppid=2480 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.343000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:26:06.345000 audit[2511]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=2511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:06.345000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdef595230 a2=0 a3=0 items=0 ppid=2480 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.345000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:26:06.348000 audit[2512]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:06.348000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc59bcfcd0 a2=0 a3=0 items=0 ppid=2480 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:26:06.348000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:26:06.350000 audit[2513]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:06.350000 audit[2513]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc65f87070 a2=0 a3=0 items=0 ppid=2480 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.350000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:26:06.352198 kubelet[2480]: E1216 03:26:06.352159 2480 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 03:26:06.352986 kubelet[2480]: I1216 03:26:06.352965 2480 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:26:06.353113 kubelet[2480]: I1216 03:26:06.353096 2480 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:26:06.353234 kubelet[2480]: I1216 03:26:06.353199 2480 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:26:06.355604 kubelet[2480]: I1216 03:26:06.355569 2480 policy_none.go:49] "None policy: Start" Dec 16 03:26:06.355604 kubelet[2480]: I1216 03:26:06.355606 2480 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:26:06.355739 kubelet[2480]: I1216 03:26:06.355626 2480 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:26:06.363638 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 03:26:06.376970 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 03:26:06.382348 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 03:26:06.391090 kubelet[2480]: E1216 03:26:06.391053 2480 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 03:26:06.391860 kubelet[2480]: I1216 03:26:06.391633 2480 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:26:06.391977 kubelet[2480]: I1216 03:26:06.391838 2480 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:26:06.393673 kubelet[2480]: E1216 03:26:06.393143 2480 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 03:26:06.393673 kubelet[2480]: E1216 03:26:06.393198 2480 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" not found" Dec 16 03:26:06.394163 kubelet[2480]: I1216 03:26:06.393843 2480 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:26:06.457198 systemd[1]: Created slice kubepods-burstable-pod557ca91ed163291f4d955f24a74d1609.slice - libcontainer container kubepods-burstable-pod557ca91ed163291f4d955f24a74d1609.slice. Dec 16 03:26:06.484785 kubelet[2480]: E1216 03:26:06.484723 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:06.489124 systemd[1]: Created slice kubepods-burstable-pod1239615b4819b2376024252d6d8fc954.slice - libcontainer container kubepods-burstable-pod1239615b4819b2376024252d6d8fc954.slice. Dec 16 03:26:06.496955 kubelet[2480]: I1216 03:26:06.496904 2480 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:06.497415 kubelet[2480]: E1216 03:26:06.497383 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:06.497975 kubelet[2480]: E1216 03:26:06.497938 2480 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.16:6443/api/v1/nodes\": dial tcp 10.128.0.16:6443: connect: connection refused" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:06.501443 systemd[1]: Created slice kubepods-burstable-podf41d53b681fa153a4554189b7d8937c3.slice - libcontainer container kubepods-burstable-podf41d53b681fa153a4554189b7d8937c3.slice. 
Dec 16 03:26:06.504285 kubelet[2480]: I1216 03:26:06.504250 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/557ca91ed163291f4d955f24a74d1609-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"557ca91ed163291f4d955f24a74d1609\") " pod="kube-system/kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:06.505339 kubelet[2480]: E1216 03:26:06.505311 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:06.512690 kubelet[2480]: E1216 03:26:06.512653 2480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.16:6443: connect: connection refused" interval="400ms" Dec 16 03:26:06.605278 kubelet[2480]: I1216 03:26:06.605239 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/557ca91ed163291f4d955f24a74d1609-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"557ca91ed163291f4d955f24a74d1609\") " pod="kube-system/kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:06.605540 kubelet[2480]: I1216 03:26:06.605290 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1239615b4819b2376024252d6d8fc954-ca-certs\") pod \"kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"1239615b4819b2376024252d6d8fc954\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:06.605540 kubelet[2480]: I1216 03:26:06.605332 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1239615b4819b2376024252d6d8fc954-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"1239615b4819b2376024252d6d8fc954\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:06.605540 kubelet[2480]: I1216 03:26:06.605361 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1239615b4819b2376024252d6d8fc954-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"1239615b4819b2376024252d6d8fc954\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:06.605540 kubelet[2480]: I1216 03:26:06.605429 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f41d53b681fa153a4554189b7d8937c3-kubeconfig\") pod \"kube-scheduler-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: 
\"f41d53b681fa153a4554189b7d8937c3\") " pod="kube-system/kube-scheduler-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:06.605811 kubelet[2480]: I1216 03:26:06.605498 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/557ca91ed163291f4d955f24a74d1609-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"557ca91ed163291f4d955f24a74d1609\") " pod="kube-system/kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:06.605811 kubelet[2480]: I1216 03:26:06.605530 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1239615b4819b2376024252d6d8fc954-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"1239615b4819b2376024252d6d8fc954\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:06.605811 kubelet[2480]: I1216 03:26:06.605560 2480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1239615b4819b2376024252d6d8fc954-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"1239615b4819b2376024252d6d8fc954\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:06.691155 kubelet[2480]: E1216 03:26:06.691015 2480 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal.188194481943c9d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal,UID:ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal,},FirstTimestamp:2025-12-16 03:26:06.288374224 +0000 UTC m=+0.553607987,LastTimestamp:2025-12-16 03:26:06.288374224 +0000 UTC m=+0.553607987,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal,}" Dec 16 03:26:06.702683 kubelet[2480]: I1216 03:26:06.702645 2480 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:06.703097 kubelet[2480]: E1216 03:26:06.703042 2480 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.16:6443/api/v1/nodes\": dial tcp 10.128.0.16:6443: connect: connection refused" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:06.786738 containerd[1591]: time="2025-12-16T03:26:06.786574098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal,Uid:557ca91ed163291f4d955f24a74d1609,Namespace:kube-system,Attempt:0,}" Dec 16 03:26:06.798278 
containerd[1591]: time="2025-12-16T03:26:06.798214290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal,Uid:1239615b4819b2376024252d6d8fc954,Namespace:kube-system,Attempt:0,}" Dec 16 03:26:06.813245 containerd[1591]: time="2025-12-16T03:26:06.813202853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal,Uid:f41d53b681fa153a4554189b7d8937c3,Namespace:kube-system,Attempt:0,}" Dec 16 03:26:06.821894 containerd[1591]: time="2025-12-16T03:26:06.821826457Z" level=info msg="connecting to shim 67090356243b4e8eb91e4193b631978d665bb718265c5b6e09fcc80da0a13859" address="unix:///run/containerd/s/1c8f8dbcd3b3c62c525e3169e1d66286c910a93a7a073d259d14dfa71d96537e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:26:06.875182 containerd[1591]: time="2025-12-16T03:26:06.875086688Z" level=info msg="connecting to shim d6e7119099274491a59c119acb8aa714a512d613a76e8e377ce051bfa447cd62" address="unix:///run/containerd/s/2fa53f50b1a17cf17024bb348e88e41e2d5c4bb914ebe2f8b7e1419237e1b40a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:26:06.884346 systemd[1]: Started cri-containerd-67090356243b4e8eb91e4193b631978d665bb718265c5b6e09fcc80da0a13859.scope - libcontainer container 67090356243b4e8eb91e4193b631978d665bb718265c5b6e09fcc80da0a13859. Dec 16 03:26:06.906880 containerd[1591]: time="2025-12-16T03:26:06.906748012Z" level=info msg="connecting to shim 5a4c1fd1eb1c897af051146b7926a4164b740f95f0a3f172b6444d717505f461" address="unix:///run/containerd/s/ccd3b597b0be7d8e3947d266cade3314a0a591dcae933f4b1e7cb1d5f0af5b5a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:26:06.913682 kubelet[2480]: E1216 03:26:06.913640 2480 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.16:6443: connect: connection refused" interval="800ms" Dec 16 03:26:06.930000 audit: BPF prog-id=86 op=LOAD Dec 16 03:26:06.937256 kernel: kauditd_printk_skb: 72 callbacks suppressed Dec 16 03:26:06.937350 kernel: audit: type=1334 audit(1765855566.930:333): prog-id=86 op=LOAD Dec 16 03:26:06.942000 audit: BPF prog-id=87 op=LOAD Dec 16 03:26:06.953198 kernel: audit: type=1334 audit(1765855566.942:334): prog-id=87 op=LOAD Dec 16 03:26:06.942000 audit[2534]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2523 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.985962 kernel: audit: type=1300 audit(1765855566.942:334): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2523 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637303930333536323433623465386562393165343139336236333139 Dec 16 03:26:07.014935 kernel: audit: type=1327 audit(1765855566.942:334): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637303930333536323433623465386562393165343139336236333139 Dec 16 03:26:07.015015 kernel: audit: type=1334 audit(1765855566.942:335): prog-id=87 op=UNLOAD Dec 16 03:26:06.942000 audit: BPF prog-id=87 op=UNLOAD Dec 16 03:26:07.050987 kernel: audit: type=1300 audit(1765855566.942:335): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2523 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.942000 audit[2534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2523 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.023507 systemd[1]: Started cri-containerd-5a4c1fd1eb1c897af051146b7926a4164b740f95f0a3f172b6444d717505f461.scope - libcontainer container 5a4c1fd1eb1c897af051146b7926a4164b740f95f0a3f172b6444d717505f461. Dec 16 03:26:07.037986 systemd[1]: Started cri-containerd-d6e7119099274491a59c119acb8aa714a512d613a76e8e377ce051bfa447cd62.scope - libcontainer container d6e7119099274491a59c119acb8aa714a512d613a76e8e377ce051bfa447cd62. Dec 16 03:26:06.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637303930333536323433623465386562393165343139336236333139 Dec 16 03:26:07.089665 kernel: audit: type=1327 audit(1765855566.942:335): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637303930333536323433623465386562393165343139336236333139 Dec 16 03:26:07.089770 kernel: audit: type=1334 audit(1765855566.942:336): prog-id=88 op=LOAD Dec 16 03:26:06.942000 audit: BPF prog-id=88 op=LOAD Dec 16 03:26:06.942000 audit[2534]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2523 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637303930333536323433623465386562393165343139336236333139 Dec 16 03:26:07.123648 kubelet[2480]: I1216 03:26:07.122485 2480 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:07.123648 kubelet[2480]: E1216 03:26:07.123390 2480 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.16:6443/api/v1/nodes\": dial tcp 10.128.0.16:6443: connect: connection refused" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:07.149096 kernel: audit: type=1300 audit(1765855566.942:336): arch=c000003e syscall=321 success=yes 
exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2523 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.149196 kernel: audit: type=1327 audit(1765855566.942:336): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637303930333536323433623465386562393165343139336236333139 Dec 16 03:26:06.942000 audit: BPF prog-id=89 op=LOAD Dec 16 03:26:06.942000 audit[2534]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2523 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637303930333536323433623465386562393165343139336236333139 Dec 16 03:26:06.942000 audit: BPF prog-id=89 op=UNLOAD Dec 16 03:26:06.942000 audit[2534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2523 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637303930333536323433623465386562393165343139336236333139 Dec 16 03:26:06.942000 audit: BPF prog-id=88 op=UNLOAD Dec 16 03:26:06.942000 audit[2534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2523 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637303930333536323433623465386562393165343139336236333139 Dec 16 03:26:06.942000 audit: BPF prog-id=90 op=LOAD Dec 16 03:26:06.942000 audit[2534]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2523 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637303930333536323433623465386562393165343139336236333139 Dec 16 03:26:07.100000 audit: BPF prog-id=91 op=LOAD Dec 16 03:26:07.105000 audit: BPF prog-id=92 op=LOAD Dec 16 03:26:07.105000 audit[2581]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2555 pid=2581 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436653731313930393932373434393161353963313139616362386161 Dec 16 03:26:07.105000 audit: BPF prog-id=92 op=UNLOAD Dec 16 03:26:07.105000 audit[2581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2555 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436653731313930393932373434393161353963313139616362386161 Dec 16 03:26:07.105000 audit: BPF prog-id=93 op=LOAD Dec 16 03:26:07.105000 audit[2581]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2555 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436653731313930393932373434393161353963313139616362386161 Dec 16 03:26:07.105000 audit: BPF prog-id=94 op=LOAD Dec 16 03:26:07.105000 audit[2581]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2555 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436653731313930393932373434393161353963313139616362386161 Dec 16 03:26:07.105000 audit: BPF prog-id=94 op=UNLOAD Dec 16 03:26:07.105000 audit[2581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2555 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436653731313930393932373434393161353963313139616362386161 Dec 16 03:26:07.105000 audit: BPF prog-id=93 op=UNLOAD Dec 16 03:26:07.105000 audit[2581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2555 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:26:07.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436653731313930393932373434393161353963313139616362386161 Dec 16 03:26:07.106000 audit: BPF prog-id=95 op=LOAD Dec 16 03:26:07.105000 audit: BPF prog-id=96 op=LOAD Dec 16 03:26:07.105000 audit[2581]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2555 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436653731313930393932373434393161353963313139616362386161 Dec 16 03:26:07.107000 audit: BPF prog-id=97 op=LOAD Dec 16 03:26:07.107000 audit[2594]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2575 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561346331666431656231633839376166303531313436623739323661 Dec 16 03:26:07.107000 audit: BPF prog-id=97 op=UNLOAD Dec 16 03:26:07.107000 audit[2594]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561346331666431656231633839376166303531313436623739323661 Dec 16 03:26:07.107000 audit: BPF prog-id=98 op=LOAD Dec 16 03:26:07.107000 audit[2594]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2575 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561346331666431656231633839376166303531313436623739323661 Dec 16 03:26:07.107000 audit: BPF prog-id=99 op=LOAD Dec 16 03:26:07.107000 audit[2594]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2575 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.107000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561346331666431656231633839376166303531313436623739323661 Dec 16 03:26:07.108000 audit: BPF prog-id=99 op=UNLOAD Dec 16 03:26:07.108000 audit[2594]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561346331666431656231633839376166303531313436623739323661 Dec 16 03:26:07.148000 audit: BPF prog-id=98 op=UNLOAD Dec 16 03:26:07.148000 audit[2594]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561346331666431656231633839376166303531313436623739323661 Dec 16 03:26:07.151000 audit: BPF prog-id=100 op=LOAD Dec 16 03:26:07.151000 audit[2594]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2575 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561346331666431656231633839376166303531313436623739323661 Dec 16 03:26:07.160625 containerd[1591]: time="2025-12-16T03:26:07.160311579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal,Uid:557ca91ed163291f4d955f24a74d1609,Namespace:kube-system,Attempt:0,} returns sandbox id \"67090356243b4e8eb91e4193b631978d665bb718265c5b6e09fcc80da0a13859\"" Dec 16 03:26:07.166080 kubelet[2480]: E1216 03:26:07.166041 2480 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-21291" Dec 16 03:26:07.173206 containerd[1591]: time="2025-12-16T03:26:07.173160110Z" level=info msg="CreateContainer within sandbox \"67090356243b4e8eb91e4193b631978d665bb718265c5b6e09fcc80da0a13859\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 03:26:07.183739 containerd[1591]: time="2025-12-16T03:26:07.183695245Z" level=info msg="Container f65a814a8c41af83d9d27c168b3c0098e00cb1844593c38f32e5a57a1854169d: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:26:07.191169 kubelet[2480]: E1216 03:26:07.191123 2480 reflector.go:200] "Failed 
to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 03:26:07.193035 containerd[1591]: time="2025-12-16T03:26:07.192940607Z" level=info msg="CreateContainer within sandbox \"67090356243b4e8eb91e4193b631978d665bb718265c5b6e09fcc80da0a13859\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f65a814a8c41af83d9d27c168b3c0098e00cb1844593c38f32e5a57a1854169d\"" Dec 16 03:26:07.193620 containerd[1591]: time="2025-12-16T03:26:07.193582939Z" level=info msg="StartContainer for \"f65a814a8c41af83d9d27c168b3c0098e00cb1844593c38f32e5a57a1854169d\"" Dec 16 03:26:07.197852 containerd[1591]: time="2025-12-16T03:26:07.197807174Z" level=info msg="connecting to shim f65a814a8c41af83d9d27c168b3c0098e00cb1844593c38f32e5a57a1854169d" address="unix:///run/containerd/s/1c8f8dbcd3b3c62c525e3169e1d66286c910a93a7a073d259d14dfa71d96537e" protocol=ttrpc version=3 Dec 16 03:26:07.246705 systemd[1]: Started cri-containerd-f65a814a8c41af83d9d27c168b3c0098e00cb1844593c38f32e5a57a1854169d.scope - libcontainer container f65a814a8c41af83d9d27c168b3c0098e00cb1844593c38f32e5a57a1854169d. Dec 16 03:26:07.263313 containerd[1591]: time="2025-12-16T03:26:07.263158450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal,Uid:1239615b4819b2376024252d6d8fc954,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6e7119099274491a59c119acb8aa714a512d613a76e8e377ce051bfa447cd62\"" Dec 16 03:26:07.267801 kubelet[2480]: E1216 03:26:07.267744 2480 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flat" Dec 16 03:26:07.274585 containerd[1591]: time="2025-12-16T03:26:07.274538206Z" level=info msg="CreateContainer within sandbox \"d6e7119099274491a59c119acb8aa714a512d613a76e8e377ce051bfa447cd62\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 03:26:07.275953 containerd[1591]: time="2025-12-16T03:26:07.275151386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal,Uid:f41d53b681fa153a4554189b7d8937c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a4c1fd1eb1c897af051146b7926a4164b740f95f0a3f172b6444d717505f461\"" Dec 16 03:26:07.278486 kubelet[2480]: E1216 03:26:07.278428 2480 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-21291" Dec 16 03:26:07.284087 containerd[1591]: time="2025-12-16T03:26:07.284038357Z" level=info msg="CreateContainer within sandbox \"5a4c1fd1eb1c897af051146b7926a4164b740f95f0a3f172b6444d717505f461\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 03:26:07.289943 containerd[1591]: time="2025-12-16T03:26:07.288707452Z" level=info msg="Container 9af3dac9a81c382fac094c2ab3d7410426d35af9bcb7b8428d13e101497bf4f7: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:26:07.293000 audit: BPF prog-id=101 op=LOAD Dec 
16 03:26:07.294000 audit: BPF prog-id=102 op=LOAD Dec 16 03:26:07.294000 audit[2638]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2523 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636356138313461386334316166383364396432376331363862336330 Dec 16 03:26:07.294000 audit: BPF prog-id=102 op=UNLOAD Dec 16 03:26:07.294000 audit[2638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2523 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636356138313461386334316166383364396432376331363862336330 Dec 16 03:26:07.294000 audit: BPF prog-id=103 op=LOAD Dec 16 03:26:07.294000 audit[2638]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2523 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636356138313461386334316166383364396432376331363862336330 Dec 16 03:26:07.294000 audit: BPF prog-id=104 op=LOAD Dec 16 03:26:07.294000 audit[2638]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2523 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636356138313461386334316166383364396432376331363862336330 Dec 16 03:26:07.295000 audit: BPF prog-id=104 op=UNLOAD Dec 16 03:26:07.295000 audit[2638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2523 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636356138313461386334316166383364396432376331363862336330 Dec 16 03:26:07.295000 audit: BPF prog-id=103 op=UNLOAD Dec 16 03:26:07.295000 audit[2638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 
a3=0 items=0 ppid=2523 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636356138313461386334316166383364396432376331363862336330 Dec 16 03:26:07.295000 audit: BPF prog-id=105 op=LOAD Dec 16 03:26:07.295000 audit[2638]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2523 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636356138313461386334316166383364396432376331363862336330 Dec 16 03:26:07.300380 containerd[1591]: time="2025-12-16T03:26:07.300017572Z" level=info msg="Container b77c2c555a8aa4bda966ec4d9ddad470f6155fcbd18383a70f1ee463f02572ad: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:26:07.309549 containerd[1591]: time="2025-12-16T03:26:07.306629150Z" level=info msg="CreateContainer within sandbox \"d6e7119099274491a59c119acb8aa714a512d613a76e8e377ce051bfa447cd62\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9af3dac9a81c382fac094c2ab3d7410426d35af9bcb7b8428d13e101497bf4f7\"" Dec 16 03:26:07.309549 containerd[1591]: time="2025-12-16T03:26:07.308504576Z" level=info msg="CreateContainer within sandbox \"5a4c1fd1eb1c897af051146b7926a4164b740f95f0a3f172b6444d717505f461\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b77c2c555a8aa4bda966ec4d9ddad470f6155fcbd18383a70f1ee463f02572ad\"" Dec 16 03:26:07.310453 containerd[1591]: time="2025-12-16T03:26:07.310386748Z" level=info msg="StartContainer for \"b77c2c555a8aa4bda966ec4d9ddad470f6155fcbd18383a70f1ee463f02572ad\"" Dec 16 03:26:07.312123 containerd[1591]: time="2025-12-16T03:26:07.312088587Z" level=info msg="StartContainer for \"9af3dac9a81c382fac094c2ab3d7410426d35af9bcb7b8428d13e101497bf4f7\"" Dec 16 03:26:07.313682 containerd[1591]: time="2025-12-16T03:26:07.313632466Z" level=info msg="connecting to shim 9af3dac9a81c382fac094c2ab3d7410426d35af9bcb7b8428d13e101497bf4f7" address="unix:///run/containerd/s/2fa53f50b1a17cf17024bb348e88e41e2d5c4bb914ebe2f8b7e1419237e1b40a" protocol=ttrpc version=3 Dec 16 03:26:07.315708 containerd[1591]: time="2025-12-16T03:26:07.313924503Z" level=info msg="connecting to shim b77c2c555a8aa4bda966ec4d9ddad470f6155fcbd18383a70f1ee463f02572ad" address="unix:///run/containerd/s/ccd3b597b0be7d8e3947d266cade3314a0a591dcae933f4b1e7cb1d5f0af5b5a" protocol=ttrpc version=3 Dec 16 03:26:07.365317 systemd[1]: Started cri-containerd-9af3dac9a81c382fac094c2ab3d7410426d35af9bcb7b8428d13e101497bf4f7.scope - libcontainer container 9af3dac9a81c382fac094c2ab3d7410426d35af9bcb7b8428d13e101497bf4f7. Dec 16 03:26:07.368347 systemd[1]: Started cri-containerd-b77c2c555a8aa4bda966ec4d9ddad470f6155fcbd18383a70f1ee463f02572ad.scope - libcontainer container b77c2c555a8aa4bda966ec4d9ddad470f6155fcbd18383a70f1ee463f02572ad. 
Dec 16 03:26:07.408000 audit: BPF prog-id=106 op=LOAD Dec 16 03:26:07.414000 audit: BPF prog-id=107 op=LOAD Dec 16 03:26:07.414000 audit[2675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2555 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961663364616339613831633338326661633039346332616233643734 Dec 16 03:26:07.414000 audit: BPF prog-id=107 op=UNLOAD Dec 16 03:26:07.414000 audit[2675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2555 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961663364616339613831633338326661633039346332616233643734 Dec 16 03:26:07.416000 audit: BPF prog-id=108 op=LOAD Dec 16 03:26:07.416000 audit[2675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2555 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961663364616339613831633338326661633039346332616233643734 Dec 16 03:26:07.416000 audit: BPF prog-id=109 op=LOAD Dec 16 03:26:07.416000 audit[2675]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2555 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961663364616339613831633338326661633039346332616233643734 Dec 16 03:26:07.416000 audit: BPF prog-id=109 op=UNLOAD Dec 16 03:26:07.416000 audit[2675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2555 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961663364616339613831633338326661633039346332616233643734 Dec 16 03:26:07.416000 audit: BPF prog-id=108 op=UNLOAD Dec 16 03:26:07.416000 audit[2675]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2555 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961663364616339613831633338326661633039346332616233643734 Dec 16 03:26:07.416000 audit: BPF prog-id=110 op=LOAD Dec 16 03:26:07.416000 audit[2675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2555 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961663364616339613831633338326661633039346332616233643734 Dec 16 03:26:07.417000 audit: BPF prog-id=111 op=LOAD Dec 16 03:26:07.419000 audit: BPF prog-id=112 op=LOAD Dec 16 03:26:07.419000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2575 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237376332633535356138616134626461393636656334643964646164 Dec 16 03:26:07.419000 audit: BPF prog-id=112 op=UNLOAD Dec 16 03:26:07.419000 audit[2674]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237376332633535356138616134626461393636656334643964646164 Dec 16 03:26:07.421000 audit: BPF prog-id=113 op=LOAD Dec 16 03:26:07.421000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2575 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237376332633535356138616134626461393636656334643964646164 Dec 16 03:26:07.421000 audit: BPF prog-id=114 op=LOAD Dec 16 03:26:07.421000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2575 pid=2674 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237376332633535356138616134626461393636656334643964646164 Dec 16 03:26:07.421000 audit: BPF prog-id=114 op=UNLOAD Dec 16 03:26:07.421000 audit[2674]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237376332633535356138616134626461393636656334643964646164 Dec 16 03:26:07.421000 audit: BPF prog-id=113 op=UNLOAD Dec 16 03:26:07.421000 audit[2674]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237376332633535356138616134626461393636656334643964646164 Dec 16 03:26:07.421000 audit: BPF prog-id=115 op=LOAD Dec 16 03:26:07.421000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2575 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:07.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237376332633535356138616134626461393636656334643964646164 Dec 16 03:26:07.424700 containerd[1591]: time="2025-12-16T03:26:07.424629222Z" level=info msg="StartContainer for \"f65a814a8c41af83d9d27c168b3c0098e00cb1844593c38f32e5a57a1854169d\" returns successfully" Dec 16 03:26:07.501936 containerd[1591]: time="2025-12-16T03:26:07.501456529Z" level=info msg="StartContainer for \"9af3dac9a81c382fac094c2ab3d7410426d35af9bcb7b8428d13e101497bf4f7\" returns successfully" Dec 16 03:26:07.566005 containerd[1591]: time="2025-12-16T03:26:07.564767849Z" level=info msg="StartContainer for \"b77c2c555a8aa4bda966ec4d9ddad470f6155fcbd18383a70f1ee463f02572ad\" returns successfully" Dec 16 03:26:07.928594 kubelet[2480]: I1216 03:26:07.928553 2480 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:08.399390 kubelet[2480]: E1216 03:26:08.399347 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" 
not found" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:08.401935 kubelet[2480]: E1216 03:26:08.401048 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:08.407525 kubelet[2480]: E1216 03:26:08.407492 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:09.411080 kubelet[2480]: E1216 03:26:09.411035 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:09.411621 kubelet[2480]: E1216 03:26:09.411577 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:09.414150 kubelet[2480]: E1216 03:26:09.414118 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:10.413661 kubelet[2480]: E1216 03:26:10.413613 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:10.414351 kubelet[2480]: E1216 03:26:10.414183 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:10.414670 kubelet[2480]: E1216 03:26:10.414639 2480 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" not found" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:10.596513 kubelet[2480]: I1216 03:26:10.596441 2480 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:10.596513 kubelet[2480]: E1216 03:26:10.596511 2480 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\": node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" not found" Dec 16 03:26:10.603531 kubelet[2480]: I1216 03:26:10.603489 2480 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:10.675978 kubelet[2480]: E1216 03:26:10.675815 2480 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="1.6s" Dec 16 03:26:10.687951 kubelet[2480]: E1216 03:26:10.687659 2480 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:10.687951 kubelet[2480]: I1216 03:26:10.687702 2480 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:10.691200 kubelet[2480]: E1216 03:26:10.691151 2480 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:10.691446 kubelet[2480]: I1216 03:26:10.691323 2480 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:10.693728 kubelet[2480]: E1216 03:26:10.693682 2480 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:11.272379 kubelet[2480]: I1216 03:26:11.272312 2480 apiserver.go:52] "Watching apiserver" Dec 16 03:26:11.303681 kubelet[2480]: I1216 03:26:11.303621 2480 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 03:26:11.412794 kubelet[2480]: I1216 03:26:11.412751 2480 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:11.423879 kubelet[2480]: I1216 03:26:11.423840 2480 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Dec 16 03:26:12.723121 systemd[1]: Reload requested from client PID 2759 ('systemctl') (unit session-10.scope)... Dec 16 03:26:12.723165 systemd[1]: Reloading... Dec 16 03:26:12.912946 zram_generator::config[2804]: No configuration found. Dec 16 03:26:13.275941 systemd[1]: Reloading finished in 551 ms. Dec 16 03:26:13.316625 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:26:13.341482 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 03:26:13.342090 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:26:13.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:26:13.343036 systemd[1]: kubelet.service: Consumed 1.056s CPU time, 130.5M memory peak. Dec 16 03:26:13.369861 kernel: kauditd_printk_skb: 122 callbacks suppressed Dec 16 03:26:13.370015 kernel: audit: type=1131 audit(1765855573.342:381): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:26:13.349374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:26:13.347000 audit: BPF prog-id=116 op=LOAD Dec 16 03:26:13.347000 audit: BPF prog-id=73 op=UNLOAD Dec 16 03:26:13.384547 kernel: audit: type=1334 audit(1765855573.347:382): prog-id=116 op=LOAD Dec 16 03:26:13.384642 kernel: audit: type=1334 audit(1765855573.347:383): prog-id=73 op=UNLOAD Dec 16 03:26:13.384686 kernel: audit: type=1334 audit(1765855573.347:384): prog-id=117 op=LOAD Dec 16 03:26:13.347000 audit: BPF prog-id=117 op=LOAD Dec 16 03:26:13.399938 kernel: audit: type=1334 audit(1765855573.347:385): prog-id=118 op=LOAD Dec 16 03:26:13.347000 audit: BPF prog-id=118 op=LOAD Dec 16 03:26:13.347000 audit: BPF prog-id=74 op=UNLOAD Dec 16 03:26:13.409082 kernel: audit: type=1334 audit(1765855573.347:386): prog-id=74 op=UNLOAD Dec 16 03:26:13.347000 audit: BPF prog-id=75 op=UNLOAD Dec 16 03:26:13.353000 audit: BPF prog-id=119 op=LOAD Dec 16 03:26:13.423500 kernel: audit: type=1334 audit(1765855573.347:387): prog-id=75 op=UNLOAD Dec 16 03:26:13.423588 kernel: audit: type=1334 audit(1765855573.353:388): prog-id=119 op=LOAD Dec 16 03:26:13.423631 kernel: audit: type=1334 audit(1765855573.353:389): prog-id=83 op=UNLOAD Dec 16 03:26:13.353000 audit: BPF prog-id=83 op=UNLOAD Dec 16 03:26:13.353000 audit: BPF prog-id=120 op=LOAD Dec 16 03:26:13.437784 kernel: audit: type=1334 audit(1765855573.353:390): prog-id=120 op=LOAD Dec 16 03:26:13.353000 audit: BPF prog-id=121 op=LOAD Dec 16 03:26:13.353000 audit: BPF prog-id=84 op=UNLOAD Dec 16 03:26:13.353000 audit: BPF prog-id=85 op=UNLOAD Dec 16 03:26:13.358000 audit: BPF prog-id=122 op=LOAD Dec 16 03:26:13.358000 audit: BPF prog-id=66 op=UNLOAD Dec 16 03:26:13.358000 audit: BPF prog-id=123 op=LOAD Dec 16 03:26:13.358000 audit: BPF prog-id=124 op=LOAD Dec 16 03:26:13.358000 audit: BPF prog-id=67 op=UNLOAD Dec 16 03:26:13.358000 audit: BPF prog-id=68 op=UNLOAD Dec 16 03:26:13.363000 audit: BPF prog-id=125 op=LOAD Dec 16 03:26:13.363000 audit: BPF prog-id=82 op=UNLOAD Dec 16 03:26:13.363000 audit: BPF prog-id=126 op=LOAD Dec 16 03:26:13.363000 audit: BPF prog-id=72 op=UNLOAD Dec 16 03:26:13.363000 audit: BPF prog-id=127 op=LOAD Dec 16 03:26:13.363000 audit: BPF prog-id=76 op=UNLOAD Dec 16 03:26:13.363000 audit: BPF prog-id=128 op=LOAD Dec 16 03:26:13.363000 audit: BPF prog-id=129 op=LOAD Dec 16 03:26:13.363000 audit: BPF prog-id=77 op=UNLOAD Dec 16 03:26:13.363000 audit: BPF prog-id=78 op=UNLOAD Dec 16 03:26:13.368000 audit: BPF prog-id=130 op=LOAD Dec 16 03:26:13.368000 audit: BPF prog-id=69 op=UNLOAD Dec 16 03:26:13.369000 audit: BPF prog-id=131 op=LOAD Dec 16 03:26:13.369000 audit: BPF prog-id=79 op=UNLOAD Dec 16 03:26:13.369000 audit: BPF prog-id=132 op=LOAD Dec 16 03:26:13.390000 audit: BPF prog-id=133 op=LOAD Dec 16 03:26:13.390000 audit: BPF prog-id=80 op=UNLOAD Dec 16 03:26:13.390000 audit: BPF prog-id=81 op=UNLOAD Dec 16 03:26:13.390000 audit: BPF prog-id=134 op=LOAD Dec 16 03:26:13.390000 audit: BPF prog-id=135 op=LOAD Dec 16 03:26:13.390000 audit: BPF prog-id=70 op=UNLOAD Dec 16 03:26:13.390000 audit: BPF prog-id=71 op=UNLOAD Dec 16 03:26:13.788969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:26:13.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:26:13.803410 (kubelet)[2854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:26:13.879773 kubelet[2854]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:26:13.879773 kubelet[2854]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 03:26:13.879773 kubelet[2854]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:26:13.881077 kubelet[2854]: I1216 03:26:13.879864 2854 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:26:13.893830 kubelet[2854]: I1216 03:26:13.893601 2854 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 03:26:13.893830 kubelet[2854]: I1216 03:26:13.893666 2854 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:26:13.894543 kubelet[2854]: I1216 03:26:13.894514 2854 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 03:26:13.900938 kubelet[2854]: I1216 03:26:13.898538 2854 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 03:26:13.902385 kubelet[2854]: I1216 03:26:13.902344 2854 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:26:13.912546 kubelet[2854]: I1216 03:26:13.912520 2854 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:26:13.925502 kubelet[2854]: I1216 03:26:13.925474 2854 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 03:26:13.925817 kubelet[2854]: I1216 03:26:13.925772 2854 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:26:13.927935 kubelet[2854]: I1216 03:26:13.925822 2854 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:26:13.928153 kubelet[2854]: I1216 03:26:13.927942 2854 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 03:26:13.928153 kubelet[2854]: I1216 03:26:13.927962 2854 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 03:26:13.928153 kubelet[2854]: I1216 03:26:13.928023 2854 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:26:13.928806 kubelet[2854]: I1216 03:26:13.928756 2854 kubelet.go:480] "Attempting to sync node with API server" Dec 16 03:26:13.929873 kubelet[2854]: I1216 03:26:13.929808 2854 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:26:13.929873 kubelet[2854]: I1216 03:26:13.929853 2854 kubelet.go:386] "Adding apiserver pod source" Dec 16 03:26:13.929873 kubelet[2854]: I1216 03:26:13.929875 2854 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:26:13.939034 kubelet[2854]: I1216 03:26:13.939009 2854 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:26:13.941560 kubelet[2854]: I1216 03:26:13.940804 2854 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 03:26:13.992268 kubelet[2854]: I1216 03:26:13.992242 2854 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:26:13.993058 kubelet[2854]: I1216 03:26:13.992555 2854 server.go:1289] "Started kubelet" Dec 16 03:26:14.000070 kubelet[2854]: I1216 03:26:13.999732 2854 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Dec 16 03:26:14.001325 kubelet[2854]: I1216 03:26:14.000349 2854 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:26:14.005514 kubelet[2854]: I1216 03:26:14.005495 2854 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:26:14.007212 kubelet[2854]: I1216 03:26:14.007184 2854 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:26:14.010319 kubelet[2854]: I1216 03:26:14.010299 2854 server.go:317] "Adding debug handlers to kubelet server" Dec 16 03:26:14.012552 kubelet[2854]: I1216 03:26:14.012534 2854 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:26:14.013106 kubelet[2854]: I1216 03:26:14.013081 2854 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:26:14.015522 kubelet[2854]: I1216 03:26:14.014792 2854 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:26:14.017693 kubelet[2854]: I1216 03:26:14.017671 2854 factory.go:223] Registration of the systemd container factory successfully Dec 16 03:26:14.018042 kubelet[2854]: I1216 03:26:14.018009 2854 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:26:14.019168 kubelet[2854]: E1216 03:26:14.019145 2854 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:26:14.021070 kubelet[2854]: I1216 03:26:14.018069 2854 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:26:14.027726 kubelet[2854]: I1216 03:26:14.026775 2854 factory.go:223] Registration of the containerd container factory successfully Dec 16 03:26:14.065947 kubelet[2854]: I1216 03:26:14.064613 2854 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 03:26:14.074054 kubelet[2854]: I1216 03:26:14.074005 2854 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 03:26:14.074054 kubelet[2854]: I1216 03:26:14.074033 2854 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 03:26:14.074054 kubelet[2854]: I1216 03:26:14.074058 2854 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 03:26:14.074359 kubelet[2854]: I1216 03:26:14.074069 2854 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 03:26:14.074359 kubelet[2854]: E1216 03:26:14.074141 2854 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:26:14.134590 kubelet[2854]: I1216 03:26:14.134542 2854 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:26:14.134590 kubelet[2854]: I1216 03:26:14.134566 2854 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:26:14.134590 kubelet[2854]: I1216 03:26:14.134593 2854 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:26:14.135081 kubelet[2854]: I1216 03:26:14.134767 2854 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 03:26:14.135081 kubelet[2854]: I1216 03:26:14.134784 2854 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 03:26:14.135081 kubelet[2854]: I1216 03:26:14.134809 2854 policy_none.go:49] "None policy: Start" Dec 16 03:26:14.135081 kubelet[2854]: I1216 03:26:14.134824 2854 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:26:14.135081 kubelet[2854]: I1216 03:26:14.134839 2854 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:26:14.135532 kubelet[2854]: I1216 03:26:14.135246 2854 state_mem.go:75] "Updated machine memory state" Dec 16 03:26:14.150408 kubelet[2854]: E1216 03:26:14.150364 2854 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 03:26:14.151278 kubelet[2854]: I1216 03:26:14.151202 2854 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:26:14.151278 kubelet[2854]: I1216 03:26:14.151222 2854 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:26:14.153391 kubelet[2854]: I1216 03:26:14.152842 2854 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:26:14.154474 kubelet[2854]: E1216 03:26:14.154450 2854 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 03:26:14.176949 kubelet[2854]: I1216 03:26:14.176041 2854 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:14.176949 kubelet[2854]: I1216 03:26:14.176781 2854 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:14.178283 kubelet[2854]: I1216 03:26:14.177595 2854 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:14.187933 kubelet[2854]: I1216 03:26:14.187577 2854 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Dec 16 03:26:14.190794 kubelet[2854]: I1216 03:26:14.190761 2854 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Dec 16 03:26:14.193754 kubelet[2854]: I1216 03:26:14.193656 2854 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Dec 16 03:26:14.194352 kubelet[2854]: E1216 03:26:14.194258 2854 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:14.268460 kubelet[2854]: I1216 03:26:14.267964 2854 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:14.279973 kubelet[2854]: I1216 03:26:14.279941 2854 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:14.280790 kubelet[2854]: I1216 03:26:14.280049 2854 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:14.318163 kubelet[2854]: I1216 03:26:14.317102 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/557ca91ed163291f4d955f24a74d1609-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"557ca91ed163291f4d955f24a74d1609\") " pod="kube-system/kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:14.318163 kubelet[2854]: I1216 03:26:14.317162 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1239615b4819b2376024252d6d8fc954-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"1239615b4819b2376024252d6d8fc954\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:14.318163 kubelet[2854]: I1216 03:26:14.317195 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1239615b4819b2376024252d6d8fc954-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"1239615b4819b2376024252d6d8fc954\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:14.318163 kubelet[2854]: I1216 03:26:14.317228 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/557ca91ed163291f4d955f24a74d1609-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"557ca91ed163291f4d955f24a74d1609\") " pod="kube-system/kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:14.318444 kubelet[2854]: I1216 03:26:14.317263 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/557ca91ed163291f4d955f24a74d1609-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"557ca91ed163291f4d955f24a74d1609\") " pod="kube-system/kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:14.318444 kubelet[2854]: I1216 03:26:14.317323 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1239615b4819b2376024252d6d8fc954-ca-certs\") pod \"kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"1239615b4819b2376024252d6d8fc954\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:14.318444 kubelet[2854]: I1216 03:26:14.317366 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1239615b4819b2376024252d6d8fc954-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"1239615b4819b2376024252d6d8fc954\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:14.318444 kubelet[2854]: I1216 03:26:14.317395 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1239615b4819b2376024252d6d8fc954-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"1239615b4819b2376024252d6d8fc954\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:14.318642 kubelet[2854]: I1216 03:26:14.317424 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f41d53b681fa153a4554189b7d8937c3-kubeconfig\") pod \"kube-scheduler-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" (UID: \"f41d53b681fa153a4554189b7d8937c3\") " pod="kube-system/kube-scheduler-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:14.935309 kubelet[2854]: I1216 03:26:14.935145 2854 apiserver.go:52] "Watching apiserver" Dec 16 03:26:15.021613 kubelet[2854]: I1216 03:26:15.021502 2854 desired_state_of_world_populator.go:158] "Finished 
populating initial desired state of world" Dec 16 03:26:15.116507 kubelet[2854]: I1216 03:26:15.115590 2854 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:15.118300 kubelet[2854]: I1216 03:26:15.118269 2854 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:15.130938 kubelet[2854]: I1216 03:26:15.129050 2854 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Dec 16 03:26:15.130938 kubelet[2854]: E1216 03:26:15.129135 2854 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:15.138189 kubelet[2854]: I1216 03:26:15.137895 2854 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Dec 16 03:26:15.138189 kubelet[2854]: E1216 03:26:15.137990 2854 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:15.180244 kubelet[2854]: I1216 03:26:15.180174 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" podStartSLOduration=1.180152726 podStartE2EDuration="1.180152726s" podCreationTimestamp="2025-12-16 03:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:26:15.163123596 +0000 UTC m=+1.353324453" watchObservedRunningTime="2025-12-16 03:26:15.180152726 +0000 UTC m=+1.370353581" Dec 16 03:26:15.182290 kubelet[2854]: I1216 03:26:15.182212 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" podStartSLOduration=1.182197588 podStartE2EDuration="1.182197588s" podCreationTimestamp="2025-12-16 03:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:26:15.180042027 +0000 UTC m=+1.370242888" watchObservedRunningTime="2025-12-16 03:26:15.182197588 +0000 UTC m=+1.372398450" Dec 16 03:26:15.213433 kubelet[2854]: I1216 03:26:15.213249 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" podStartSLOduration=4.213232245 podStartE2EDuration="4.213232245s" podCreationTimestamp="2025-12-16 03:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:26:15.201296907 +0000 UTC m=+1.391497768" watchObservedRunningTime="2025-12-16 03:26:15.213232245 +0000 UTC m=+1.403433106" Dec 16 03:26:16.340621 update_engine[1567]: I20251216 
03:26:16.340535 1567 update_attempter.cc:509] Updating boot flags... Dec 16 03:26:17.407171 kubelet[2854]: I1216 03:26:17.406900 2854 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 03:26:17.409532 kubelet[2854]: I1216 03:26:17.408620 2854 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 03:26:17.409604 containerd[1591]: time="2025-12-16T03:26:17.407755475Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 03:26:18.380967 systemd[1]: Created slice kubepods-besteffort-pod720b8442_2ce3_4d8f_94cb_622b7c0952a3.slice - libcontainer container kubepods-besteffort-pod720b8442_2ce3_4d8f_94cb_622b7c0952a3.slice. Dec 16 03:26:18.546129 kubelet[2854]: I1216 03:26:18.546064 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/720b8442-2ce3-4d8f-94cb-622b7c0952a3-xtables-lock\") pod \"kube-proxy-sdr8t\" (UID: \"720b8442-2ce3-4d8f-94cb-622b7c0952a3\") " pod="kube-system/kube-proxy-sdr8t" Dec 16 03:26:18.546129 kubelet[2854]: I1216 03:26:18.546127 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/720b8442-2ce3-4d8f-94cb-622b7c0952a3-kube-proxy\") pod \"kube-proxy-sdr8t\" (UID: \"720b8442-2ce3-4d8f-94cb-622b7c0952a3\") " pod="kube-system/kube-proxy-sdr8t" Dec 16 03:26:18.546663 kubelet[2854]: I1216 03:26:18.546153 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/720b8442-2ce3-4d8f-94cb-622b7c0952a3-lib-modules\") pod \"kube-proxy-sdr8t\" (UID: \"720b8442-2ce3-4d8f-94cb-622b7c0952a3\") " pod="kube-system/kube-proxy-sdr8t" Dec 16 03:26:18.546663 kubelet[2854]: I1216 03:26:18.546176 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdm2s\" (UniqueName: \"kubernetes.io/projected/720b8442-2ce3-4d8f-94cb-622b7c0952a3-kube-api-access-bdm2s\") pod \"kube-proxy-sdr8t\" (UID: \"720b8442-2ce3-4d8f-94cb-622b7c0952a3\") " pod="kube-system/kube-proxy-sdr8t" Dec 16 03:26:18.663190 systemd[1]: Created slice kubepods-besteffort-poddb4ba4f0_9d7b_4d47_8110_5d8a40fff3cb.slice - libcontainer container kubepods-besteffort-poddb4ba4f0_9d7b_4d47_8110_5d8a40fff3cb.slice. 
Dec 16 03:26:18.695034 containerd[1591]: time="2025-12-16T03:26:18.694979763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sdr8t,Uid:720b8442-2ce3-4d8f-94cb-622b7c0952a3,Namespace:kube-system,Attempt:0,}" Dec 16 03:26:18.724730 containerd[1591]: time="2025-12-16T03:26:18.724618909Z" level=info msg="connecting to shim 6006f2e867b5edf02812b8d0b8f748480b67c65fa09761c4e0c7d20706652cb6" address="unix:///run/containerd/s/dbece3041b37fb48e2775225124275a606156867345b37f0aff935da1912567e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:26:18.749318 kubelet[2854]: I1216 03:26:18.749273 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmtz7\" (UniqueName: \"kubernetes.io/projected/db4ba4f0-9d7b-4d47-8110-5d8a40fff3cb-kube-api-access-vmtz7\") pod \"tigera-operator-7dcd859c48-p5596\" (UID: \"db4ba4f0-9d7b-4d47-8110-5d8a40fff3cb\") " pod="tigera-operator/tigera-operator-7dcd859c48-p5596" Dec 16 03:26:18.749457 kubelet[2854]: I1216 03:26:18.749330 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/db4ba4f0-9d7b-4d47-8110-5d8a40fff3cb-var-lib-calico\") pod \"tigera-operator-7dcd859c48-p5596\" (UID: \"db4ba4f0-9d7b-4d47-8110-5d8a40fff3cb\") " pod="tigera-operator/tigera-operator-7dcd859c48-p5596" Dec 16 03:26:18.762169 systemd[1]: Started cri-containerd-6006f2e867b5edf02812b8d0b8f748480b67c65fa09761c4e0c7d20706652cb6.scope - libcontainer container 6006f2e867b5edf02812b8d0b8f748480b67c65fa09761c4e0c7d20706652cb6. Dec 16 03:26:18.776000 audit: BPF prog-id=136 op=LOAD Dec 16 03:26:18.783109 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 03:26:18.783204 kernel: audit: type=1334 audit(1765855578.776:423): prog-id=136 op=LOAD Dec 16 03:26:18.779000 audit: BPF prog-id=137 op=LOAD Dec 16 03:26:18.797420 kernel: audit: type=1334 audit(1765855578.779:424): prog-id=137 op=LOAD Dec 16 03:26:18.798097 kernel: audit: type=1300 audit(1765855578.779:424): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:18.779000 audit[2945]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:18.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630303666326538363762356564663032383132623864306238663734 Dec 16 03:26:18.855804 kernel: audit: type=1327 audit(1765855578.779:424): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630303666326538363762356564663032383132623864306238663734 Dec 16 03:26:18.857151 kernel: audit: type=1334 audit(1765855578.779:425): prog-id=137 op=UNLOAD Dec 16 03:26:18.779000 audit: BPF prog-id=137 op=UNLOAD Dec 16 03:26:18.779000 audit[2945]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:18.891831 kernel: audit: type=1300 audit(1765855578.779:425): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:18.892280 kernel: audit: type=1327 audit(1765855578.779:425): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630303666326538363762356564663032383132623864306238663734 Dec 16 03:26:18.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630303666326538363762356564663032383132623864306238663734 Dec 16 03:26:18.924977 kernel: audit: type=1334 audit(1765855578.779:426): prog-id=138 op=LOAD Dec 16 03:26:18.779000 audit: BPF prog-id=138 op=LOAD Dec 16 03:26:18.926159 containerd[1591]: time="2025-12-16T03:26:18.921957040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sdr8t,Uid:720b8442-2ce3-4d8f-94cb-622b7c0952a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6006f2e867b5edf02812b8d0b8f748480b67c65fa09761c4e0c7d20706652cb6\"" Dec 16 03:26:18.779000 audit[2945]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:18.940979 containerd[1591]: time="2025-12-16T03:26:18.940940927Z" level=info msg="CreateContainer within sandbox \"6006f2e867b5edf02812b8d0b8f748480b67c65fa09761c4e0c7d20706652cb6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 03:26:18.957599 kernel: audit: type=1300 audit(1765855578.779:426): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:18.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630303666326538363762356564663032383132623864306238663734 Dec 16 03:26:18.961382 containerd[1591]: time="2025-12-16T03:26:18.960428642Z" level=info msg="Container 12b9ef12e28a4e625b258a90a449a477e539d84c0f6442ee1809f0eba8185a86: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:26:18.972498 containerd[1591]: time="2025-12-16T03:26:18.972457789Z" level=info msg="CreateContainer within sandbox \"6006f2e867b5edf02812b8d0b8f748480b67c65fa09761c4e0c7d20706652cb6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"12b9ef12e28a4e625b258a90a449a477e539d84c0f6442ee1809f0eba8185a86\"" Dec 16 03:26:18.974650 containerd[1591]: 
time="2025-12-16T03:26:18.974621102Z" level=info msg="StartContainer for \"12b9ef12e28a4e625b258a90a449a477e539d84c0f6442ee1809f0eba8185a86\"" Dec 16 03:26:18.978281 containerd[1591]: time="2025-12-16T03:26:18.978107462Z" level=info msg="connecting to shim 12b9ef12e28a4e625b258a90a449a477e539d84c0f6442ee1809f0eba8185a86" address="unix:///run/containerd/s/dbece3041b37fb48e2775225124275a606156867345b37f0aff935da1912567e" protocol=ttrpc version=3 Dec 16 03:26:18.779000 audit: BPF prog-id=139 op=LOAD Dec 16 03:26:18.779000 audit[2945]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:18.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630303666326538363762356564663032383132623864306238663734 Dec 16 03:26:18.990198 kernel: audit: type=1327 audit(1765855578.779:426): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630303666326538363762356564663032383132623864306238663734 Dec 16 03:26:18.779000 audit: BPF prog-id=139 op=UNLOAD Dec 16 03:26:18.779000 audit[2945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:18.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630303666326538363762356564663032383132623864306238663734 Dec 16 03:26:18.779000 audit: BPF prog-id=138 op=UNLOAD Dec 16 03:26:18.779000 audit[2945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:18.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630303666326538363762356564663032383132623864306238663734 Dec 16 03:26:18.779000 audit: BPF prog-id=140 op=LOAD Dec 16 03:26:18.779000 audit[2945]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2934 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:18.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630303666326538363762356564663032383132623864306238663734 Dec 16 03:26:18.992022 containerd[1591]: 
time="2025-12-16T03:26:18.991014485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-p5596,Uid:db4ba4f0-9d7b-4d47-8110-5d8a40fff3cb,Namespace:tigera-operator,Attempt:0,}" Dec 16 03:26:19.024448 containerd[1591]: time="2025-12-16T03:26:19.024392034Z" level=info msg="connecting to shim 1d80f5ef4ef56f74117e4284571639876fb8ccd03d367ed66b4965a57a798ff7" address="unix:///run/containerd/s/3b0a61ba1285e02ac33182c260a9bc6a3b48ef645b071c6f8a3ba3ef593bfbb3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:26:19.027201 systemd[1]: Started cri-containerd-12b9ef12e28a4e625b258a90a449a477e539d84c0f6442ee1809f0eba8185a86.scope - libcontainer container 12b9ef12e28a4e625b258a90a449a477e539d84c0f6442ee1809f0eba8185a86. Dec 16 03:26:19.077193 systemd[1]: Started cri-containerd-1d80f5ef4ef56f74117e4284571639876fb8ccd03d367ed66b4965a57a798ff7.scope - libcontainer container 1d80f5ef4ef56f74117e4284571639876fb8ccd03d367ed66b4965a57a798ff7. Dec 16 03:26:19.091000 audit: BPF prog-id=141 op=LOAD Dec 16 03:26:19.091000 audit[2971]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2934 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132623965663132653238613465363235623235386139306134343961 Dec 16 03:26:19.091000 audit: BPF prog-id=142 op=LOAD Dec 16 03:26:19.091000 audit[2971]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2934 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132623965663132653238613465363235623235386139306134343961 Dec 16 03:26:19.091000 audit: BPF prog-id=142 op=UNLOAD Dec 16 03:26:19.091000 audit[2971]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132623965663132653238613465363235623235386139306134343961 Dec 16 03:26:19.091000 audit: BPF prog-id=141 op=UNLOAD Dec 16 03:26:19.091000 audit[2971]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2934 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.091000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132623965663132653238613465363235623235386139306134343961 Dec 16 03:26:19.091000 audit: BPF prog-id=143 op=LOAD Dec 16 03:26:19.091000 audit[2971]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2934 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132623965663132653238613465363235623235386139306134343961 Dec 16 03:26:19.102000 audit: BPF prog-id=144 op=LOAD Dec 16 03:26:19.105000 audit: BPF prog-id=145 op=LOAD Dec 16 03:26:19.105000 audit[3009]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2992 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164383066356566346566353666373431313765343238343537313633 Dec 16 03:26:19.105000 audit: BPF prog-id=145 op=UNLOAD Dec 16 03:26:19.105000 audit[3009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164383066356566346566353666373431313765343238343537313633 Dec 16 03:26:19.108000 audit: BPF prog-id=146 op=LOAD Dec 16 03:26:19.108000 audit[3009]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2992 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164383066356566346566353666373431313765343238343537313633 Dec 16 03:26:19.108000 audit: BPF prog-id=147 op=LOAD Dec 16 03:26:19.108000 audit[3009]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2992 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.108000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164383066356566346566353666373431313765343238343537313633 Dec 16 03:26:19.108000 audit: BPF prog-id=147 op=UNLOAD Dec 16 03:26:19.108000 audit[3009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164383066356566346566353666373431313765343238343537313633 Dec 16 03:26:19.108000 audit: BPF prog-id=146 op=UNLOAD Dec 16 03:26:19.108000 audit[3009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164383066356566346566353666373431313765343238343537313633 Dec 16 03:26:19.108000 audit: BPF prog-id=148 op=LOAD Dec 16 03:26:19.108000 audit[3009]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2992 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164383066356566346566353666373431313765343238343537313633 Dec 16 03:26:19.140357 containerd[1591]: time="2025-12-16T03:26:19.140295641Z" level=info msg="StartContainer for \"12b9ef12e28a4e625b258a90a449a477e539d84c0f6442ee1809f0eba8185a86\" returns successfully" Dec 16 03:26:19.177993 containerd[1591]: time="2025-12-16T03:26:19.177771621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-p5596,Uid:db4ba4f0-9d7b-4d47-8110-5d8a40fff3cb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1d80f5ef4ef56f74117e4284571639876fb8ccd03d367ed66b4965a57a798ff7\"" Dec 16 03:26:19.186723 containerd[1591]: time="2025-12-16T03:26:19.186551544Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 03:26:19.341000 audit[3081]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.341000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd0778790 a2=0 a3=7ffdd077877c items=0 ppid=2998 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.341000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 03:26:19.343000 audit[3083]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.343000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff40bda3f0 a2=0 a3=7fff40bda3dc items=0 ppid=2998 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.343000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 03:26:19.346000 audit[3085]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.346000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd34e788f0 a2=0 a3=7ffd34e788dc items=0 ppid=2998 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.346000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 03:26:19.348000 audit[3086]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.348000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffddb07e540 a2=0 a3=7ffddb07e52c items=0 ppid=2998 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.348000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 03:26:19.350000 audit[3087]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.350000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff48fd4a30 a2=0 a3=7fff48fd4a1c items=0 ppid=2998 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.350000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 03:26:19.352000 audit[3088]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.352000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd4733ef00 a2=0 a3=7ffd4733eeec items=0 ppid=2998 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.352000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 03:26:19.447000 audit[3089]: 
NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.447000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd98527160 a2=0 a3=7ffd9852714c items=0 ppid=2998 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.447000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 03:26:19.452000 audit[3091]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.452000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffed5297a20 a2=0 a3=7ffed5297a0c items=0 ppid=2998 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.452000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 03:26:19.458000 audit[3094]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.458000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffea3a9d320 a2=0 a3=7ffea3a9d30c items=0 ppid=2998 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.458000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 03:26:19.460000 audit[3095]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.460000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2d152ec0 a2=0 a3=7fff2d152eac items=0 ppid=2998 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.460000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 03:26:19.464000 audit[3097]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.464000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc58bed450 a2=0 a3=7ffc58bed43c items=0 ppid=2998 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.464000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 03:26:19.465000 audit[3098]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.465000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec709e8a0 a2=0 a3=7ffec709e88c items=0 ppid=2998 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.465000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 03:26:19.469000 audit[3100]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.469000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffed19399a0 a2=0 a3=7ffed193998c items=0 ppid=2998 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.469000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 03:26:19.476000 audit[3103]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.476000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc8d6c5410 a2=0 a3=7ffc8d6c53fc items=0 ppid=2998 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.476000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 03:26:19.479000 audit[3104]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.479000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0eab6120 a2=0 a3=7ffe0eab610c items=0 ppid=2998 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.479000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 03:26:19.485000 audit[3106]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.485000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd9cbc8180 a2=0 a3=7ffd9cbc816c items=0 
ppid=2998 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.485000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 03:26:19.487000 audit[3107]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.487000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff685c2080 a2=0 a3=7fff685c206c items=0 ppid=2998 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.487000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 03:26:19.493000 audit[3109]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.493000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff4a5cfc60 a2=0 a3=7fff4a5cfc4c items=0 ppid=2998 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.493000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:26:19.499000 audit[3112]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.499000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff1bbea470 a2=0 a3=7fff1bbea45c items=0 ppid=2998 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.499000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:26:19.505000 audit[3115]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.505000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd59112860 a2=0 a3=7ffd5911284c items=0 ppid=2998 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.505000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 03:26:19.506000 audit[3116]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.506000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff5cc48690 a2=0 a3=7fff5cc4867c items=0 ppid=2998 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.506000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 03:26:19.510000 audit[3118]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.510000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcc0f92340 a2=0 a3=7ffcc0f9232c items=0 ppid=2998 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.510000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:26:19.516000 audit[3121]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.516000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffe09435c0 a2=0 a3=7fffe09435ac items=0 ppid=2998 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:26:19.518000 audit[3122]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.518000 audit[3122]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4d81df60 a2=0 a3=7ffd4d81df4c items=0 ppid=2998 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.518000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 03:26:19.523000 audit[3124]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:26:19.523000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcb8d89200 a2=0 a3=7ffcb8d891ec items=0 ppid=2998 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.523000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:26:19.554000 audit[3130]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:19.554000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe8af7a260 a2=0 a3=7ffe8af7a24c items=0 ppid=2998 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.554000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:19.566000 audit[3130]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:19.566000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe8af7a260 a2=0 a3=7ffe8af7a24c items=0 ppid=2998 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.566000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:19.569000 audit[3135]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3135 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.569000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe8fa72d30 a2=0 a3=7ffe8fa72d1c items=0 ppid=2998 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.569000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 03:26:19.573000 audit[3137]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3137 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.573000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc6a734a60 a2=0 a3=7ffc6a734a4c items=0 ppid=2998 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.573000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 03:26:19.579000 audit[3140]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.579000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=752 a0=3 a1=7ffff74ae7c0 a2=0 a3=7ffff74ae7ac items=0 ppid=2998 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.579000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 03:26:19.581000 audit[3141]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.581000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9dfa8820 a2=0 a3=7ffd9dfa880c items=0 ppid=2998 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.581000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 03:26:19.585000 audit[3143]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.585000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffedaba91b0 a2=0 a3=7ffedaba919c items=0 ppid=2998 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 03:26:19.587000 audit[3144]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.587000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb7c52da0 a2=0 a3=7ffcb7c52d8c items=0 ppid=2998 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.587000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 03:26:19.593000 audit[3146]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.593000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd25e82360 a2=0 a3=7ffd25e8234c items=0 ppid=2998 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.593000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 03:26:19.600000 audit[3149]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3149 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.600000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff851ae550 a2=0 a3=7fff851ae53c items=0 ppid=2998 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.600000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 03:26:19.602000 audit[3150]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.602000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc94becb70 a2=0 a3=7ffc94becb5c items=0 ppid=2998 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.602000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 03:26:19.607000 audit[3152]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.607000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd3f0cbf10 a2=0 a3=7ffd3f0cbefc items=0 ppid=2998 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.607000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 03:26:19.610000 audit[3153]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.610000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffedb5227a0 a2=0 a3=7ffedb52278c items=0 ppid=2998 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.610000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 03:26:19.613000 audit[3155]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.613000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff87f97c10 a2=0 a3=7fff87f97bfc 
items=0 ppid=2998 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.613000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:26:19.619000 audit[3158]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.619000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff40b7b900 a2=0 a3=7fff40b7b8ec items=0 ppid=2998 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.619000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 03:26:19.625000 audit[3161]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.625000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd14c79140 a2=0 a3=7ffd14c7912c items=0 ppid=2998 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.625000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 03:26:19.627000 audit[3162]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.627000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff098285a0 a2=0 a3=7fff0982858c items=0 ppid=2998 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.627000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 03:26:19.632000 audit[3164]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.632000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd92963230 a2=0 a3=7ffd9296321c items=0 ppid=2998 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.632000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:26:19.639000 audit[3167]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.639000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffeceb3e00 a2=0 a3=7fffeceb3dec items=0 ppid=2998 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.639000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:26:19.641000 audit[3168]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.641000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc54969240 a2=0 a3=7ffc5496922c items=0 ppid=2998 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.641000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 03:26:19.645000 audit[3170]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.645000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffda6e052b0 a2=0 a3=7ffda6e0529c items=0 ppid=2998 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.645000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:26:19.647000 audit[3171]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.647000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe06d250c0 a2=0 a3=7ffe06d250ac items=0 ppid=2998 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.647000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:26:19.650000 audit[3173]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.650000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd95370070 a2=0 a3=7ffd9537005c items=0 ppid=2998 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.650000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:26:19.656000 audit[3176]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:26:19.656000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdcc2e4260 a2=0 a3=7ffdcc2e424c items=0 ppid=2998 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.656000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:26:19.663000 audit[3178]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:26:19.663000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc98da8e00 a2=0 a3=7ffc98da8dec items=0 ppid=2998 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.663000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:19.664000 audit[3178]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:26:19.664000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc98da8e00 a2=0 a3=7ffc98da8dec items=0 ppid=2998 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:19.664000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:20.851002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2761273449.mount: Deactivated successfully. 
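Annotation (not part of the captured log): the PROCTITLE fields in the audit records above are hex-encoded command lines whose arguments are separated by NUL bytes; the comm= field in the same records is that command truncated to the kernel's 15-character limit (hence "iptables-restor" and "ip6tables-resto"). A minimal sketch, assuming Python 3, that decodes one of the values copied verbatim from this log:

# Minimal sketch (Python 3): decode an audit PROCTITLE value back into the
# command line it records. PROCTITLE is hex-encoded, with NUL bytes separating
# the argv entries; the sample below is copied verbatim from the log above.
sample = ("69707461626C65732D726573746F7265002D770035002D5700"
          "313030303030002D2D6E6F666C757368002D2D636F756E74657273")

def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00"))

print(decode_proctitle(sample))
# prints: iptables-restore -w 5 -W 100000 --noflush --counters

The longer PROCTITLE values in the records above decode the same way into the individual iptables/ip6tables commands that registered the KUBE-* chains and rules shown in the NETFILTER_CFG entries.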
Dec 16 03:26:21.914456 containerd[1591]: time="2025-12-16T03:26:21.914378979Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:21.916026 containerd[1591]: time="2025-12-16T03:26:21.915742713Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 16 03:26:21.917024 containerd[1591]: time="2025-12-16T03:26:21.916984870Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:21.919843 containerd[1591]: time="2025-12-16T03:26:21.919804669Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:21.920886 containerd[1591]: time="2025-12-16T03:26:21.920700718Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.734104963s" Dec 16 03:26:21.920886 containerd[1591]: time="2025-12-16T03:26:21.920743430Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 03:26:21.925936 containerd[1591]: time="2025-12-16T03:26:21.925875337Z" level=info msg="CreateContainer within sandbox \"1d80f5ef4ef56f74117e4284571639876fb8ccd03d367ed66b4965a57a798ff7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 03:26:21.937508 containerd[1591]: time="2025-12-16T03:26:21.937174727Z" level=info msg="Container 2f07e75c1771cc4aafd7a77a4881d7fa2cf64abc576e6a95a039a241786e0948: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:26:21.946139 containerd[1591]: time="2025-12-16T03:26:21.946088002Z" level=info msg="CreateContainer within sandbox \"1d80f5ef4ef56f74117e4284571639876fb8ccd03d367ed66b4965a57a798ff7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2f07e75c1771cc4aafd7a77a4881d7fa2cf64abc576e6a95a039a241786e0948\"" Dec 16 03:26:21.946979 containerd[1591]: time="2025-12-16T03:26:21.946944827Z" level=info msg="StartContainer for \"2f07e75c1771cc4aafd7a77a4881d7fa2cf64abc576e6a95a039a241786e0948\"" Dec 16 03:26:21.948325 containerd[1591]: time="2025-12-16T03:26:21.948207611Z" level=info msg="connecting to shim 2f07e75c1771cc4aafd7a77a4881d7fa2cf64abc576e6a95a039a241786e0948" address="unix:///run/containerd/s/3b0a61ba1285e02ac33182c260a9bc6a3b48ef645b071c6f8a3ba3ef593bfbb3" protocol=ttrpc version=3 Dec 16 03:26:21.994166 systemd[1]: Started cri-containerd-2f07e75c1771cc4aafd7a77a4881d7fa2cf64abc576e6a95a039a241786e0948.scope - libcontainer container 2f07e75c1771cc4aafd7a77a4881d7fa2cf64abc576e6a95a039a241786e0948. 
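Annotation (not part of the captured log): the "Pulled image" message above reports both a size and a wall-clock duration, so a rough average transfer rate can be read off directly; treating the reported size of 25,057,686 bytes as the amount transferred is only an approximation (the earlier "stop pulling" message reports a slightly smaller bytes read=23558205), but it works out to roughly 8.7 MiB/s. A minimal sketch, assuming Python 3 and only the message format shown in this log, that extracts those two numbers:

# Minimal sketch (Python 3): read the size and wall-clock duration out of a
# containerd "Pulled image" message like the one above and estimate an average
# transfer rate. The parsing pattern is an assumption based on this one sample.
import re

line = ('Pulled image "quay.io/tigera/operator:v1.38.7" ... '  # middle of the message elided
        'size "25057686" in 2.734104963s')

m = re.search(r'size "(\d+)" in ([0-9.]+)s', line)
if m:
    size_bytes, seconds = int(m.group(1)), float(m.group(2))
    # with the values above: 25057686 bytes / 2.734104963 s ≈ 8.7 MiB/s
    print(f"{size_bytes / seconds / 2**20:.1f} MiB/s average")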
Dec 16 03:26:22.010000 audit: BPF prog-id=149 op=LOAD Dec 16 03:26:22.011000 audit: BPF prog-id=150 op=LOAD Dec 16 03:26:22.011000 audit[3187]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2992 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:22.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765373563313737316363346161666437613737613438383164 Dec 16 03:26:22.011000 audit: BPF prog-id=150 op=UNLOAD Dec 16 03:26:22.011000 audit[3187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:22.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765373563313737316363346161666437613737613438383164 Dec 16 03:26:22.012000 audit: BPF prog-id=151 op=LOAD Dec 16 03:26:22.012000 audit[3187]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2992 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:22.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765373563313737316363346161666437613737613438383164 Dec 16 03:26:22.012000 audit: BPF prog-id=152 op=LOAD Dec 16 03:26:22.012000 audit[3187]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2992 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:22.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765373563313737316363346161666437613737613438383164 Dec 16 03:26:22.012000 audit: BPF prog-id=152 op=UNLOAD Dec 16 03:26:22.012000 audit[3187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:22.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765373563313737316363346161666437613737613438383164 Dec 16 03:26:22.012000 audit: BPF prog-id=151 op=UNLOAD Dec 16 03:26:22.012000 audit[3187]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2992 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:22.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765373563313737316363346161666437613737613438383164 Dec 16 03:26:22.012000 audit: BPF prog-id=153 op=LOAD Dec 16 03:26:22.012000 audit[3187]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2992 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:22.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765373563313737316363346161666437613737613438383164 Dec 16 03:26:22.038496 containerd[1591]: time="2025-12-16T03:26:22.038297369Z" level=info msg="StartContainer for \"2f07e75c1771cc4aafd7a77a4881d7fa2cf64abc576e6a95a039a241786e0948\" returns successfully" Dec 16 03:26:22.163034 kubelet[2854]: I1216 03:26:22.162600 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sdr8t" podStartSLOduration=4.162576866 podStartE2EDuration="4.162576866s" podCreationTimestamp="2025-12-16 03:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:26:20.157058016 +0000 UTC m=+6.347258871" watchObservedRunningTime="2025-12-16 03:26:22.162576866 +0000 UTC m=+8.352777726" Dec 16 03:26:23.581954 kubelet[2854]: I1216 03:26:23.581864 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-p5596" podStartSLOduration=2.846003973 podStartE2EDuration="5.581840635s" podCreationTimestamp="2025-12-16 03:26:18 +0000 UTC" firstStartedPulling="2025-12-16 03:26:19.186173358 +0000 UTC m=+5.376374212" lastFinishedPulling="2025-12-16 03:26:21.922010022 +0000 UTC m=+8.112210874" observedRunningTime="2025-12-16 03:26:22.163619029 +0000 UTC m=+8.353819889" watchObservedRunningTime="2025-12-16 03:26:23.581840635 +0000 UTC m=+9.772041495" Dec 16 03:26:27.627548 sudo[1920]: pam_unix(sudo:session): session closed for user root Dec 16 03:26:27.657941 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 03:26:27.658052 kernel: audit: type=1106 audit(1765855587.626:503): pid=1920 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:26:27.626000 audit[1920]: USER_END pid=1920 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 03:26:27.626000 audit[1920]: CRED_DISP pid=1920 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:26:27.688936 kernel: audit: type=1104 audit(1765855587.626:504): pid=1920 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:26:27.701938 sshd[1919]: Connection closed by 147.75.109.163 port 59782 Dec 16 03:26:27.702716 sshd-session[1915]: pam_unix(sshd:session): session closed for user core Dec 16 03:26:27.745464 kernel: audit: type=1106 audit(1765855587.706:505): pid=1915 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:26:27.706000 audit[1915]: USER_END pid=1915 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:26:27.706000 audit[1915]: CRED_DISP pid=1915 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:26:27.748336 systemd[1]: sshd@8-10.128.0.16:22-147.75.109.163:59782.service: Deactivated successfully. Dec 16 03:26:27.754728 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 03:26:27.755158 systemd[1]: session-10.scope: Consumed 6.979s CPU time, 231.8M memory peak. Dec 16 03:26:27.760940 systemd-logind[1566]: Session 10 logged out. Waiting for processes to exit. Dec 16 03:26:27.764830 systemd-logind[1566]: Removed session 10. Dec 16 03:26:27.773943 kernel: audit: type=1104 audit(1765855587.706:506): pid=1915 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:26:27.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.16:22-147.75.109.163:59782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:26:27.805050 kernel: audit: type=1131 audit(1765855587.747:507): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.16:22-147.75.109.163:59782 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:26:29.258000 audit[3272]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:29.275941 kernel: audit: type=1325 audit(1765855589.258:508): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:29.258000 audit[3272]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff169e5970 a2=0 a3=7fff169e595c items=0 ppid=2998 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:29.314937 kernel: audit: type=1300 audit(1765855589.258:508): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff169e5970 a2=0 a3=7fff169e595c items=0 ppid=2998 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:29.258000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:29.281000 audit[3272]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:29.352332 kernel: audit: type=1327 audit(1765855589.258:508): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:29.352425 kernel: audit: type=1325 audit(1765855589.281:509): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:29.281000 audit[3272]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff169e5970 a2=0 a3=0 items=0 ppid=2998 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:29.385934 kernel: audit: type=1300 audit(1765855589.281:509): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff169e5970 a2=0 a3=0 items=0 ppid=2998 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:29.281000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:29.424000 audit[3274]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:29.424000 audit[3274]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdeae1fbb0 a2=0 a3=7ffdeae1fb9c items=0 ppid=2998 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:29.424000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:29.431000 audit[3274]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3274 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:29.431000 audit[3274]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdeae1fbb0 a2=0 a3=0 items=0 ppid=2998 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:29.431000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:33.245299 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 03:26:33.245498 kernel: audit: type=1325 audit(1765855593.222:512): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:33.222000 audit[3276]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:33.286975 kernel: audit: type=1300 audit(1765855593.222:512): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff55918860 a2=0 a3=7fff5591884c items=0 ppid=2998 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:33.222000 audit[3276]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff55918860 a2=0 a3=7fff5591884c items=0 ppid=2998 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:33.222000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:33.307975 kernel: audit: type=1327 audit(1765855593.222:512): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:33.248000 audit[3276]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:33.326930 kernel: audit: type=1325 audit(1765855593.248:513): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3276 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:33.248000 audit[3276]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff55918860 a2=0 a3=0 items=0 ppid=2998 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:33.360998 kernel: audit: type=1300 audit(1765855593.248:513): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff55918860 a2=0 a3=0 items=0 ppid=2998 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:33.248000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:33.305000 audit[3278]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 
03:26:33.393698 kernel: audit: type=1327 audit(1765855593.248:513): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:33.393805 kernel: audit: type=1325 audit(1765855593.305:514): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:33.305000 audit[3278]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd2900f700 a2=0 a3=7ffd2900f6ec items=0 ppid=2998 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:33.305000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:33.443219 kernel: audit: type=1300 audit(1765855593.305:514): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd2900f700 a2=0 a3=7ffd2900f6ec items=0 ppid=2998 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:33.445006 kernel: audit: type=1327 audit(1765855593.305:514): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:33.445073 kernel: audit: type=1325 audit(1765855593.311:515): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:33.311000 audit[3278]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:33.311000 audit[3278]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd2900f700 a2=0 a3=0 items=0 ppid=2998 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:33.311000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:34.383000 audit[3280]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:34.383000 audit[3280]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff7e957650 a2=0 a3=7fff7e95763c items=0 ppid=2998 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:34.383000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:34.388000 audit[3280]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:34.388000 audit[3280]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff7e957650 a2=0 a3=0 items=0 ppid=2998 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:34.388000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:35.544000 audit[3282]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:35.544000 audit[3282]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe5a036050 a2=0 a3=7ffe5a03603c items=0 ppid=2998 pid=3282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:35.544000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:35.549000 audit[3282]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:35.549000 audit[3282]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe5a036050 a2=0 a3=0 items=0 ppid=2998 pid=3282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:35.549000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:35.589227 systemd[1]: Created slice kubepods-besteffort-pod6c665283_f815_4ff0_a090_8e810a7fe88b.slice - libcontainer container kubepods-besteffort-pod6c665283_f815_4ff0_a090_8e810a7fe88b.slice. Dec 16 03:26:35.675952 kubelet[2854]: I1216 03:26:35.675873 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c665283-f815-4ff0-a090-8e810a7fe88b-tigera-ca-bundle\") pod \"calico-typha-7bb94b7565-hs4qt\" (UID: \"6c665283-f815-4ff0-a090-8e810a7fe88b\") " pod="calico-system/calico-typha-7bb94b7565-hs4qt" Dec 16 03:26:35.676608 kubelet[2854]: I1216 03:26:35.676079 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6c665283-f815-4ff0-a090-8e810a7fe88b-typha-certs\") pod \"calico-typha-7bb94b7565-hs4qt\" (UID: \"6c665283-f815-4ff0-a090-8e810a7fe88b\") " pod="calico-system/calico-typha-7bb94b7565-hs4qt" Dec 16 03:26:35.676608 kubelet[2854]: I1216 03:26:35.676207 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knlmp\" (UniqueName: \"kubernetes.io/projected/6c665283-f815-4ff0-a090-8e810a7fe88b-kube-api-access-knlmp\") pod \"calico-typha-7bb94b7565-hs4qt\" (UID: \"6c665283-f815-4ff0-a090-8e810a7fe88b\") " pod="calico-system/calico-typha-7bb94b7565-hs4qt" Dec 16 03:26:35.717800 systemd[1]: Created slice kubepods-besteffort-pod32f015bd_2168_46f2_b76e_16ee6c7c49e8.slice - libcontainer container kubepods-besteffort-pod32f015bd_2168_46f2_b76e_16ee6c7c49e8.slice. 
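Annotation (not part of the captured log): the two "Created slice" entries above pair up with the pod UIDs in the surrounding kubelet volume messages; the slice name is built from the pod's QoS class and its UID with dashes replaced by underscores. A minimal sketch, assuming Python 3, that reproduces the slice names seen in this log from those UIDs (the substitution rule is inferred from these log lines, not taken from Kubernetes documentation):

# Minimal sketch (Python 3): rebuild the systemd slice names from the pod UIDs
# that appear in the kubelet volume messages nearby. The dash-to-underscore
# substitution is inferred from the strings in this log, not from upstream docs.
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("besteffort", "6c665283-f815-4ff0-a090-8e810a7fe88b"))
# -> kubepods-besteffort-pod6c665283_f815_4ff0_a090_8e810a7fe88b.slice
print(pod_slice_name("besteffort", "32f015bd-2168-46f2-b76e-16ee6c7c49e8"))
# -> kubepods-besteffort-pod32f015bd_2168_46f2_b76e_16ee6c7c49e8.slice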
Dec 16 03:26:35.777097 kubelet[2854]: I1216 03:26:35.777040 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/32f015bd-2168-46f2-b76e-16ee6c7c49e8-var-lib-calico\") pod \"calico-node-t9hrz\" (UID: \"32f015bd-2168-46f2-b76e-16ee6c7c49e8\") " pod="calico-system/calico-node-t9hrz" Dec 16 03:26:35.777307 kubelet[2854]: I1216 03:26:35.777115 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/32f015bd-2168-46f2-b76e-16ee6c7c49e8-cni-net-dir\") pod \"calico-node-t9hrz\" (UID: \"32f015bd-2168-46f2-b76e-16ee6c7c49e8\") " pod="calico-system/calico-node-t9hrz" Dec 16 03:26:35.777307 kubelet[2854]: I1216 03:26:35.777147 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/32f015bd-2168-46f2-b76e-16ee6c7c49e8-cni-log-dir\") pod \"calico-node-t9hrz\" (UID: \"32f015bd-2168-46f2-b76e-16ee6c7c49e8\") " pod="calico-system/calico-node-t9hrz" Dec 16 03:26:35.777307 kubelet[2854]: I1216 03:26:35.777172 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng6br\" (UniqueName: \"kubernetes.io/projected/32f015bd-2168-46f2-b76e-16ee6c7c49e8-kube-api-access-ng6br\") pod \"calico-node-t9hrz\" (UID: \"32f015bd-2168-46f2-b76e-16ee6c7c49e8\") " pod="calico-system/calico-node-t9hrz" Dec 16 03:26:35.777307 kubelet[2854]: I1216 03:26:35.777199 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32f015bd-2168-46f2-b76e-16ee6c7c49e8-lib-modules\") pod \"calico-node-t9hrz\" (UID: \"32f015bd-2168-46f2-b76e-16ee6c7c49e8\") " pod="calico-system/calico-node-t9hrz" Dec 16 03:26:35.777307 kubelet[2854]: I1216 03:26:35.777224 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/32f015bd-2168-46f2-b76e-16ee6c7c49e8-cni-bin-dir\") pod \"calico-node-t9hrz\" (UID: \"32f015bd-2168-46f2-b76e-16ee6c7c49e8\") " pod="calico-system/calico-node-t9hrz" Dec 16 03:26:35.777581 kubelet[2854]: I1216 03:26:35.777250 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32f015bd-2168-46f2-b76e-16ee6c7c49e8-tigera-ca-bundle\") pod \"calico-node-t9hrz\" (UID: \"32f015bd-2168-46f2-b76e-16ee6c7c49e8\") " pod="calico-system/calico-node-t9hrz" Dec 16 03:26:35.777581 kubelet[2854]: I1216 03:26:35.777273 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/32f015bd-2168-46f2-b76e-16ee6c7c49e8-var-run-calico\") pod \"calico-node-t9hrz\" (UID: \"32f015bd-2168-46f2-b76e-16ee6c7c49e8\") " pod="calico-system/calico-node-t9hrz" Dec 16 03:26:35.777581 kubelet[2854]: I1216 03:26:35.777311 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/32f015bd-2168-46f2-b76e-16ee6c7c49e8-policysync\") pod \"calico-node-t9hrz\" (UID: \"32f015bd-2168-46f2-b76e-16ee6c7c49e8\") " pod="calico-system/calico-node-t9hrz" Dec 16 03:26:35.777581 kubelet[2854]: I1216 03:26:35.777351 2854 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/32f015bd-2168-46f2-b76e-16ee6c7c49e8-flexvol-driver-host\") pod \"calico-node-t9hrz\" (UID: \"32f015bd-2168-46f2-b76e-16ee6c7c49e8\") " pod="calico-system/calico-node-t9hrz" Dec 16 03:26:35.777581 kubelet[2854]: I1216 03:26:35.777380 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32f015bd-2168-46f2-b76e-16ee6c7c49e8-xtables-lock\") pod \"calico-node-t9hrz\" (UID: \"32f015bd-2168-46f2-b76e-16ee6c7c49e8\") " pod="calico-system/calico-node-t9hrz" Dec 16 03:26:35.777816 kubelet[2854]: I1216 03:26:35.777423 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/32f015bd-2168-46f2-b76e-16ee6c7c49e8-node-certs\") pod \"calico-node-t9hrz\" (UID: \"32f015bd-2168-46f2-b76e-16ee6c7c49e8\") " pod="calico-system/calico-node-t9hrz" Dec 16 03:26:35.840038 kubelet[2854]: E1216 03:26:35.838809 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mjkcx" podUID="d069b10a-0bb5-4869-a283-bc34fbcea4f8" Dec 16 03:26:35.879939 kubelet[2854]: I1216 03:26:35.879047 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrxhn\" (UniqueName: \"kubernetes.io/projected/d069b10a-0bb5-4869-a283-bc34fbcea4f8-kube-api-access-hrxhn\") pod \"csi-node-driver-mjkcx\" (UID: \"d069b10a-0bb5-4869-a283-bc34fbcea4f8\") " pod="calico-system/csi-node-driver-mjkcx" Dec 16 03:26:35.880445 kubelet[2854]: I1216 03:26:35.880364 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d069b10a-0bb5-4869-a283-bc34fbcea4f8-registration-dir\") pod \"csi-node-driver-mjkcx\" (UID: \"d069b10a-0bb5-4869-a283-bc34fbcea4f8\") " pod="calico-system/csi-node-driver-mjkcx" Dec 16 03:26:35.880445 kubelet[2854]: I1216 03:26:35.880408 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d069b10a-0bb5-4869-a283-bc34fbcea4f8-socket-dir\") pod \"csi-node-driver-mjkcx\" (UID: \"d069b10a-0bb5-4869-a283-bc34fbcea4f8\") " pod="calico-system/csi-node-driver-mjkcx" Dec 16 03:26:35.880925 kubelet[2854]: I1216 03:26:35.880802 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d069b10a-0bb5-4869-a283-bc34fbcea4f8-kubelet-dir\") pod \"csi-node-driver-mjkcx\" (UID: \"d069b10a-0bb5-4869-a283-bc34fbcea4f8\") " pod="calico-system/csi-node-driver-mjkcx" Dec 16 03:26:35.881171 kubelet[2854]: I1216 03:26:35.881099 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d069b10a-0bb5-4869-a283-bc34fbcea4f8-varrun\") pod \"csi-node-driver-mjkcx\" (UID: \"d069b10a-0bb5-4869-a283-bc34fbcea4f8\") " pod="calico-system/csi-node-driver-mjkcx" Dec 16 03:26:35.884060 kubelet[2854]: E1216 03:26:35.884013 2854 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.884323 kubelet[2854]: W1216 03:26:35.884230 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.884453 kubelet[2854]: E1216 03:26:35.884430 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.885426 kubelet[2854]: E1216 03:26:35.885380 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.885701 kubelet[2854]: W1216 03:26:35.885405 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.885701 kubelet[2854]: E1216 03:26:35.885679 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.887242 kubelet[2854]: E1216 03:26:35.887187 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.887549 kubelet[2854]: W1216 03:26:35.887454 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.887549 kubelet[2854]: E1216 03:26:35.887484 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.893244 kubelet[2854]: E1216 03:26:35.891385 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.893244 kubelet[2854]: W1216 03:26:35.891404 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.893244 kubelet[2854]: E1216 03:26:35.891421 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.893734 kubelet[2854]: E1216 03:26:35.893661 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.893734 kubelet[2854]: W1216 03:26:35.893681 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.893734 kubelet[2854]: E1216 03:26:35.893699 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:26:35.912027 containerd[1591]: time="2025-12-16T03:26:35.911733381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bb94b7565-hs4qt,Uid:6c665283-f815-4ff0-a090-8e810a7fe88b,Namespace:calico-system,Attempt:0,}" Dec 16 03:26:35.940931 kubelet[2854]: E1216 03:26:35.940578 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.940931 kubelet[2854]: W1216 03:26:35.940601 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.940931 kubelet[2854]: E1216 03:26:35.940622 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.961252 containerd[1591]: time="2025-12-16T03:26:35.961148809Z" level=info msg="connecting to shim 7ffb1f1938f136a06f05238689ba172fef806a15625fa3773f7218495e6cec0f" address="unix:///run/containerd/s/4f2efb3a4ca154423e2b3e1bbd113fbe3851d6c6bc4d946f3ad365f6dd39a9a6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:26:35.982474 kubelet[2854]: E1216 03:26:35.982443 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.982474 kubelet[2854]: W1216 03:26:35.982473 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.983004 kubelet[2854]: E1216 03:26:35.982500 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.983004 kubelet[2854]: E1216 03:26:35.982925 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.983004 kubelet[2854]: W1216 03:26:35.982942 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.983004 kubelet[2854]: E1216 03:26:35.982963 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.984247 kubelet[2854]: E1216 03:26:35.983382 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.984247 kubelet[2854]: W1216 03:26:35.983397 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.984247 kubelet[2854]: E1216 03:26:35.983415 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:26:35.984247 kubelet[2854]: E1216 03:26:35.983783 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.984247 kubelet[2854]: W1216 03:26:35.983798 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.984247 kubelet[2854]: E1216 03:26:35.983827 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.984247 kubelet[2854]: E1216 03:26:35.984213 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.984247 kubelet[2854]: W1216 03:26:35.984227 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.984247 kubelet[2854]: E1216 03:26:35.984243 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.985701 kubelet[2854]: E1216 03:26:35.984631 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.985701 kubelet[2854]: W1216 03:26:35.984645 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.985701 kubelet[2854]: E1216 03:26:35.984661 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.985701 kubelet[2854]: E1216 03:26:35.985052 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.985701 kubelet[2854]: W1216 03:26:35.985066 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.985701 kubelet[2854]: E1216 03:26:35.985083 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.985701 kubelet[2854]: E1216 03:26:35.985430 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.985701 kubelet[2854]: W1216 03:26:35.985444 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.985701 kubelet[2854]: E1216 03:26:35.985459 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:26:35.987651 kubelet[2854]: E1216 03:26:35.985842 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.987651 kubelet[2854]: W1216 03:26:35.985857 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.987651 kubelet[2854]: E1216 03:26:35.985874 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.987651 kubelet[2854]: E1216 03:26:35.986237 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.987651 kubelet[2854]: W1216 03:26:35.986251 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.987651 kubelet[2854]: E1216 03:26:35.986269 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.987651 kubelet[2854]: E1216 03:26:35.986869 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.987651 kubelet[2854]: W1216 03:26:35.986884 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.987651 kubelet[2854]: E1216 03:26:35.986901 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.987651 kubelet[2854]: E1216 03:26:35.987259 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.989037 kubelet[2854]: W1216 03:26:35.987274 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.989037 kubelet[2854]: E1216 03:26:35.987292 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.989037 kubelet[2854]: E1216 03:26:35.987608 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.989037 kubelet[2854]: W1216 03:26:35.987622 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.989037 kubelet[2854]: E1216 03:26:35.987638 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:26:35.989037 kubelet[2854]: E1216 03:26:35.987952 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.989037 kubelet[2854]: W1216 03:26:35.987967 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.989037 kubelet[2854]: E1216 03:26:35.987984 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.989037 kubelet[2854]: E1216 03:26:35.988307 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.989037 kubelet[2854]: W1216 03:26:35.988322 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.990239 kubelet[2854]: E1216 03:26:35.988338 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.990239 kubelet[2854]: E1216 03:26:35.988664 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.990239 kubelet[2854]: W1216 03:26:35.988677 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.990239 kubelet[2854]: E1216 03:26:35.988693 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.990239 kubelet[2854]: E1216 03:26:35.989094 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.990239 kubelet[2854]: W1216 03:26:35.989110 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.990239 kubelet[2854]: E1216 03:26:35.989126 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.990239 kubelet[2854]: E1216 03:26:35.989542 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.990239 kubelet[2854]: W1216 03:26:35.989555 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.990239 kubelet[2854]: E1216 03:26:35.989570 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:26:35.990725 kubelet[2854]: E1216 03:26:35.989967 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.990725 kubelet[2854]: W1216 03:26:35.989981 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.990725 kubelet[2854]: E1216 03:26:35.989997 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.990725 kubelet[2854]: E1216 03:26:35.990332 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.990725 kubelet[2854]: W1216 03:26:35.990345 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.990725 kubelet[2854]: E1216 03:26:35.990361 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.990725 kubelet[2854]: E1216 03:26:35.990700 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.990725 kubelet[2854]: W1216 03:26:35.990712 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.990725 kubelet[2854]: E1216 03:26:35.990725 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.995136 kubelet[2854]: E1216 03:26:35.991142 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.995136 kubelet[2854]: W1216 03:26:35.991157 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.995136 kubelet[2854]: E1216 03:26:35.991174 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.995136 kubelet[2854]: E1216 03:26:35.991531 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.995136 kubelet[2854]: W1216 03:26:35.991544 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.995136 kubelet[2854]: E1216 03:26:35.991559 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:26:35.995136 kubelet[2854]: E1216 03:26:35.991895 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.995136 kubelet[2854]: W1216 03:26:35.991928 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.995136 kubelet[2854]: E1216 03:26:35.991946 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:35.995136 kubelet[2854]: E1216 03:26:35.992322 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:35.996983 kubelet[2854]: W1216 03:26:35.992336 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:35.996983 kubelet[2854]: E1216 03:26:35.992353 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:36.001200 systemd[1]: Started cri-containerd-7ffb1f1938f136a06f05238689ba172fef806a15625fa3773f7218495e6cec0f.scope - libcontainer container 7ffb1f1938f136a06f05238689ba172fef806a15625fa3773f7218495e6cec0f. Dec 16 03:26:36.012254 kubelet[2854]: E1216 03:26:36.012226 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:36.012254 kubelet[2854]: W1216 03:26:36.012246 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:36.012391 kubelet[2854]: E1216 03:26:36.012265 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:26:36.021000 audit: BPF prog-id=154 op=LOAD Dec 16 03:26:36.021000 audit: BPF prog-id=155 op=LOAD Dec 16 03:26:36.021000 audit[3315]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3303 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:36.021000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766666231663139333866313336613036663035323338363839626131 Dec 16 03:26:36.022000 audit: BPF prog-id=155 op=UNLOAD Dec 16 03:26:36.022000 audit[3315]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3303 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:36.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766666231663139333866313336613036663035323338363839626131 Dec 16 03:26:36.022000 audit: BPF prog-id=156 op=LOAD Dec 16 03:26:36.022000 audit[3315]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3303 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:36.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766666231663139333866313336613036663035323338363839626131 Dec 16 03:26:36.022000 audit: BPF prog-id=157 op=LOAD Dec 16 03:26:36.022000 audit[3315]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3303 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:36.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766666231663139333866313336613036663035323338363839626131 Dec 16 03:26:36.022000 audit: BPF prog-id=157 op=UNLOAD Dec 16 03:26:36.022000 audit[3315]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3303 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:36.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766666231663139333866313336613036663035323338363839626131 Dec 16 03:26:36.022000 audit: BPF prog-id=156 op=UNLOAD Dec 16 
03:26:36.022000 audit[3315]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3303 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:36.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766666231663139333866313336613036663035323338363839626131 Dec 16 03:26:36.022000 audit: BPF prog-id=158 op=LOAD Dec 16 03:26:36.022000 audit[3315]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3303 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:36.022000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766666231663139333866313336613036663035323338363839626131 Dec 16 03:26:36.026683 containerd[1591]: time="2025-12-16T03:26:36.026645949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t9hrz,Uid:32f015bd-2168-46f2-b76e-16ee6c7c49e8,Namespace:calico-system,Attempt:0,}" Dec 16 03:26:36.062451 containerd[1591]: time="2025-12-16T03:26:36.062397111Z" level=info msg="connecting to shim d64c5204e38bdaa6c4e1c96b57e37d55aef3f49c26ae63f236af92449c364b52" address="unix:///run/containerd/s/229523360a7dd45da10ac69471ac8773444bf65c36950c1f599973cdd5bfe77a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:26:36.108236 systemd[1]: Started cri-containerd-d64c5204e38bdaa6c4e1c96b57e37d55aef3f49c26ae63f236af92449c364b52.scope - libcontainer container d64c5204e38bdaa6c4e1c96b57e37d55aef3f49c26ae63f236af92449c364b52. 
Dec 16 03:26:36.117644 containerd[1591]: time="2025-12-16T03:26:36.117596175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bb94b7565-hs4qt,Uid:6c665283-f815-4ff0-a090-8e810a7fe88b,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ffb1f1938f136a06f05238689ba172fef806a15625fa3773f7218495e6cec0f\"" Dec 16 03:26:36.122956 containerd[1591]: time="2025-12-16T03:26:36.122640840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 03:26:36.136000 audit: BPF prog-id=159 op=LOAD Dec 16 03:26:36.137000 audit: BPF prog-id=160 op=LOAD Dec 16 03:26:36.137000 audit[3381]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3370 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:36.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436346335323034653338626461613663346531633936623537653337 Dec 16 03:26:36.137000 audit: BPF prog-id=160 op=UNLOAD Dec 16 03:26:36.137000 audit[3381]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:36.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436346335323034653338626461613663346531633936623537653337 Dec 16 03:26:36.137000 audit: BPF prog-id=161 op=LOAD Dec 16 03:26:36.137000 audit[3381]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3370 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:36.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436346335323034653338626461613663346531633936623537653337 Dec 16 03:26:36.137000 audit: BPF prog-id=162 op=LOAD Dec 16 03:26:36.137000 audit[3381]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3370 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:36.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436346335323034653338626461613663346531633936623537653337 Dec 16 03:26:36.137000 audit: BPF prog-id=162 op=UNLOAD Dec 16 03:26:36.137000 audit[3381]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:36.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436346335323034653338626461613663346531633936623537653337 Dec 16 03:26:36.137000 audit: BPF prog-id=161 op=UNLOAD Dec 16 03:26:36.137000 audit[3381]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:36.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436346335323034653338626461613663346531633936623537653337 Dec 16 03:26:36.137000 audit: BPF prog-id=163 op=LOAD Dec 16 03:26:36.137000 audit[3381]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3370 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:36.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436346335323034653338626461613663346531633936623537653337 Dec 16 03:26:36.167496 containerd[1591]: time="2025-12-16T03:26:36.167456632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t9hrz,Uid:32f015bd-2168-46f2-b76e-16ee6c7c49e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"d64c5204e38bdaa6c4e1c96b57e37d55aef3f49c26ae63f236af92449c364b52\"" Dec 16 03:26:36.562000 audit[3417]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3417 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:36.562000 audit[3417]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff6b4d63e0 a2=0 a3=7fff6b4d63cc items=0 ppid=2998 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:36.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:36.568000 audit[3417]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3417 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:36.568000 audit[3417]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6b4d63e0 a2=0 a3=0 items=0 ppid=2998 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:36.568000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:37.651812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount99596824.mount: Deactivated 
successfully. Dec 16 03:26:38.075509 kubelet[2854]: E1216 03:26:38.075381 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mjkcx" podUID="d069b10a-0bb5-4869-a283-bc34fbcea4f8" Dec 16 03:26:38.540411 containerd[1591]: time="2025-12-16T03:26:38.540349205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:38.541889 containerd[1591]: time="2025-12-16T03:26:38.541688050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35230631" Dec 16 03:26:38.543037 containerd[1591]: time="2025-12-16T03:26:38.542983912Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:38.545942 containerd[1591]: time="2025-12-16T03:26:38.545604852Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:38.546650 containerd[1591]: time="2025-12-16T03:26:38.546533416Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.423844751s" Dec 16 03:26:38.546650 containerd[1591]: time="2025-12-16T03:26:38.546573313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 03:26:38.548376 containerd[1591]: time="2025-12-16T03:26:38.548045259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 03:26:38.579385 containerd[1591]: time="2025-12-16T03:26:38.578295752Z" level=info msg="CreateContainer within sandbox \"7ffb1f1938f136a06f05238689ba172fef806a15625fa3773f7218495e6cec0f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 03:26:38.592425 containerd[1591]: time="2025-12-16T03:26:38.592383635Z" level=info msg="Container 1a1bae54ac6a0cc5aa1ef041c9c010d1ea4aa48ff2c669670d746c178ad31f1a: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:26:38.608712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount609355059.mount: Deactivated successfully. 
Dec 16 03:26:38.616520 containerd[1591]: time="2025-12-16T03:26:38.616485830Z" level=info msg="CreateContainer within sandbox \"7ffb1f1938f136a06f05238689ba172fef806a15625fa3773f7218495e6cec0f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1a1bae54ac6a0cc5aa1ef041c9c010d1ea4aa48ff2c669670d746c178ad31f1a\"" Dec 16 03:26:38.618360 containerd[1591]: time="2025-12-16T03:26:38.618317597Z" level=info msg="StartContainer for \"1a1bae54ac6a0cc5aa1ef041c9c010d1ea4aa48ff2c669670d746c178ad31f1a\"" Dec 16 03:26:38.621241 containerd[1591]: time="2025-12-16T03:26:38.621209954Z" level=info msg="connecting to shim 1a1bae54ac6a0cc5aa1ef041c9c010d1ea4aa48ff2c669670d746c178ad31f1a" address="unix:///run/containerd/s/4f2efb3a4ca154423e2b3e1bbd113fbe3851d6c6bc4d946f3ad365f6dd39a9a6" protocol=ttrpc version=3 Dec 16 03:26:38.651164 systemd[1]: Started cri-containerd-1a1bae54ac6a0cc5aa1ef041c9c010d1ea4aa48ff2c669670d746c178ad31f1a.scope - libcontainer container 1a1bae54ac6a0cc5aa1ef041c9c010d1ea4aa48ff2c669670d746c178ad31f1a. Dec 16 03:26:38.670000 audit: BPF prog-id=164 op=LOAD Dec 16 03:26:38.676687 kernel: kauditd_printk_skb: 64 callbacks suppressed Dec 16 03:26:38.676789 kernel: audit: type=1334 audit(1765855598.670:538): prog-id=164 op=LOAD Dec 16 03:26:38.678000 audit: BPF prog-id=165 op=LOAD Dec 16 03:26:38.690966 kernel: audit: type=1334 audit(1765855598.678:539): prog-id=165 op=LOAD Dec 16 03:26:38.678000 audit[3428]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3303 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:38.720648 kernel: audit: type=1300 audit(1765855598.678:539): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3303 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:38.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161316261653534616336613063633561613165663034316339633031 Dec 16 03:26:38.752046 kernel: audit: type=1327 audit(1765855598.678:539): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161316261653534616336613063633561613165663034316339633031 Dec 16 03:26:38.752435 kernel: audit: type=1334 audit(1765855598.678:540): prog-id=165 op=UNLOAD Dec 16 03:26:38.678000 audit: BPF prog-id=165 op=UNLOAD Dec 16 03:26:38.678000 audit[3428]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3303 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:38.788403 kernel: audit: type=1300 audit(1765855598.678:540): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3303 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 03:26:38.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161316261653534616336613063633561613165663034316339633031 Dec 16 03:26:38.817062 kernel: audit: type=1327 audit(1765855598.678:540): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161316261653534616336613063633561613165663034316339633031 Dec 16 03:26:38.678000 audit: BPF prog-id=166 op=LOAD Dec 16 03:26:38.829076 kernel: audit: type=1334 audit(1765855598.678:541): prog-id=166 op=LOAD Dec 16 03:26:38.829157 kernel: audit: type=1300 audit(1765855598.678:541): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3303 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:38.678000 audit[3428]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3303 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:38.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161316261653534616336613063633561613165663034316339633031 Dec 16 03:26:38.886066 containerd[1591]: time="2025-12-16T03:26:38.864726119Z" level=info msg="StartContainer for \"1a1bae54ac6a0cc5aa1ef041c9c010d1ea4aa48ff2c669670d746c178ad31f1a\" returns successfully" Dec 16 03:26:38.678000 audit: BPF prog-id=167 op=LOAD Dec 16 03:26:38.678000 audit[3428]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3303 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:38.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161316261653534616336613063633561613165663034316339633031 Dec 16 03:26:38.678000 audit: BPF prog-id=167 op=UNLOAD Dec 16 03:26:38.678000 audit[3428]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3303 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:38.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161316261653534616336613063633561613165663034316339633031 Dec 16 03:26:38.678000 audit: BPF prog-id=166 op=UNLOAD Dec 16 03:26:38.678000 audit[3428]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3303 pid=3428 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:38.889945 kernel: audit: type=1327 audit(1765855598.678:541): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161316261653534616336613063633561613165663034316339633031 Dec 16 03:26:38.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161316261653534616336613063633561613165663034316339633031 Dec 16 03:26:38.678000 audit: BPF prog-id=168 op=LOAD Dec 16 03:26:38.678000 audit[3428]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3303 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:38.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161316261653534616336613063633561613165663034316339633031 Dec 16 03:26:39.270222 kubelet[2854]: E1216 03:26:39.270184 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.270867 kubelet[2854]: W1216 03:26:39.270393 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.270867 kubelet[2854]: E1216 03:26:39.270434 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.271015 kubelet[2854]: E1216 03:26:39.270878 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.271015 kubelet[2854]: W1216 03:26:39.270894 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.271015 kubelet[2854]: E1216 03:26:39.270945 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.272964 kubelet[2854]: E1216 03:26:39.271966 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.272964 kubelet[2854]: W1216 03:26:39.271990 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.272964 kubelet[2854]: E1216 03:26:39.272024 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:26:39.272964 kubelet[2854]: E1216 03:26:39.272397 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.272964 kubelet[2854]: W1216 03:26:39.272410 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.272964 kubelet[2854]: E1216 03:26:39.272438 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.272964 kubelet[2854]: E1216 03:26:39.272880 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.272964 kubelet[2854]: W1216 03:26:39.272893 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.273404 kubelet[2854]: E1216 03:26:39.273025 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.275074 kubelet[2854]: E1216 03:26:39.275047 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.275074 kubelet[2854]: W1216 03:26:39.275069 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.275213 kubelet[2854]: E1216 03:26:39.275087 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.275477 kubelet[2854]: E1216 03:26:39.275436 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.275477 kubelet[2854]: W1216 03:26:39.275474 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.275613 kubelet[2854]: E1216 03:26:39.275491 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.275851 kubelet[2854]: E1216 03:26:39.275821 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.275951 kubelet[2854]: W1216 03:26:39.275854 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.275951 kubelet[2854]: E1216 03:26:39.275870 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:26:39.276251 kubelet[2854]: E1216 03:26:39.276221 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.276328 kubelet[2854]: W1216 03:26:39.276257 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.276328 kubelet[2854]: E1216 03:26:39.276273 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.276761 kubelet[2854]: E1216 03:26:39.276740 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.276761 kubelet[2854]: W1216 03:26:39.276759 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.276897 kubelet[2854]: E1216 03:26:39.276775 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.279176 kubelet[2854]: E1216 03:26:39.279141 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.279258 kubelet[2854]: W1216 03:26:39.279183 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.279258 kubelet[2854]: E1216 03:26:39.279199 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.280147 kubelet[2854]: E1216 03:26:39.280124 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.280147 kubelet[2854]: W1216 03:26:39.280148 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.280302 kubelet[2854]: E1216 03:26:39.280164 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.280491 kubelet[2854]: E1216 03:26:39.280466 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.280491 kubelet[2854]: W1216 03:26:39.280489 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.280624 kubelet[2854]: E1216 03:26:39.280505 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:26:39.280882 kubelet[2854]: E1216 03:26:39.280848 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.280882 kubelet[2854]: W1216 03:26:39.280871 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.281013 kubelet[2854]: E1216 03:26:39.280888 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.281578 kubelet[2854]: E1216 03:26:39.281538 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.281578 kubelet[2854]: W1216 03:26:39.281561 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.281578 kubelet[2854]: E1216 03:26:39.281578 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.312703 kubelet[2854]: E1216 03:26:39.312608 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.312703 kubelet[2854]: W1216 03:26:39.312638 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.312869 kubelet[2854]: E1216 03:26:39.312779 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.313949 kubelet[2854]: E1216 03:26:39.313461 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.313949 kubelet[2854]: W1216 03:26:39.313485 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.313949 kubelet[2854]: E1216 03:26:39.313503 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.314638 kubelet[2854]: E1216 03:26:39.314614 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.314839 kubelet[2854]: W1216 03:26:39.314637 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.314982 kubelet[2854]: E1216 03:26:39.314847 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:26:39.316344 kubelet[2854]: E1216 03:26:39.316313 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.316344 kubelet[2854]: W1216 03:26:39.316336 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.316501 kubelet[2854]: E1216 03:26:39.316473 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.317450 kubelet[2854]: E1216 03:26:39.317413 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.317450 kubelet[2854]: W1216 03:26:39.317439 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.318030 kubelet[2854]: E1216 03:26:39.317988 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.320954 kubelet[2854]: E1216 03:26:39.320361 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.320954 kubelet[2854]: W1216 03:26:39.320385 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.320954 kubelet[2854]: E1216 03:26:39.320813 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.322478 kubelet[2854]: E1216 03:26:39.322446 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.322478 kubelet[2854]: W1216 03:26:39.322479 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.322611 kubelet[2854]: E1216 03:26:39.322498 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.323894 kubelet[2854]: E1216 03:26:39.323871 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.323894 kubelet[2854]: W1216 03:26:39.323894 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.324072 kubelet[2854]: E1216 03:26:39.323972 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:26:39.325375 kubelet[2854]: E1216 03:26:39.325351 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.326183 kubelet[2854]: W1216 03:26:39.326133 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.327104 kubelet[2854]: E1216 03:26:39.326177 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.327340 kubelet[2854]: E1216 03:26:39.327316 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.327340 kubelet[2854]: W1216 03:26:39.327339 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.327474 kubelet[2854]: E1216 03:26:39.327356 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.328898 kubelet[2854]: E1216 03:26:39.328834 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.328898 kubelet[2854]: W1216 03:26:39.328873 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.328898 kubelet[2854]: E1216 03:26:39.328892 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.329543 kubelet[2854]: E1216 03:26:39.329320 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.329543 kubelet[2854]: W1216 03:26:39.329342 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.329543 kubelet[2854]: E1216 03:26:39.329359 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.330375 kubelet[2854]: E1216 03:26:39.330165 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.330375 kubelet[2854]: W1216 03:26:39.330188 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.330514 kubelet[2854]: E1216 03:26:39.330207 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:26:39.331953 kubelet[2854]: E1216 03:26:39.331871 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.331953 kubelet[2854]: W1216 03:26:39.331898 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.331953 kubelet[2854]: E1216 03:26:39.331930 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.332393 kubelet[2854]: E1216 03:26:39.332302 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.332393 kubelet[2854]: W1216 03:26:39.332315 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.332393 kubelet[2854]: E1216 03:26:39.332331 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.333711 kubelet[2854]: E1216 03:26:39.333643 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.333711 kubelet[2854]: W1216 03:26:39.333676 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.334095 kubelet[2854]: E1216 03:26:39.333694 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.334860 kubelet[2854]: E1216 03:26:39.334827 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.334860 kubelet[2854]: W1216 03:26:39.334849 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.335004 kubelet[2854]: E1216 03:26:39.334866 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:26:39.336085 kubelet[2854]: E1216 03:26:39.336050 2854 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:26:39.336085 kubelet[2854]: W1216 03:26:39.336072 2854 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:26:39.336529 kubelet[2854]: E1216 03:26:39.336090 2854 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:26:39.628896 containerd[1591]: time="2025-12-16T03:26:39.628837728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:39.630269 containerd[1591]: time="2025-12-16T03:26:39.630049217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 03:26:39.631442 containerd[1591]: time="2025-12-16T03:26:39.631403692Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:39.635774 containerd[1591]: time="2025-12-16T03:26:39.635735559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:39.636669 containerd[1591]: time="2025-12-16T03:26:39.636598799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.088512908s" Dec 16 03:26:39.636669 containerd[1591]: time="2025-12-16T03:26:39.636645553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 03:26:39.641972 containerd[1591]: time="2025-12-16T03:26:39.641905615Z" level=info msg="CreateContainer within sandbox \"d64c5204e38bdaa6c4e1c96b57e37d55aef3f49c26ae63f236af92449c364b52\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 03:26:39.653076 containerd[1591]: time="2025-12-16T03:26:39.653042911Z" level=info msg="Container 0025fdba48009637796018382ec7038e7989e92f2fd2ae3a559a2bcbd3f9f2d2: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:26:39.664536 containerd[1591]: time="2025-12-16T03:26:39.664472210Z" level=info msg="CreateContainer within sandbox \"d64c5204e38bdaa6c4e1c96b57e37d55aef3f49c26ae63f236af92449c364b52\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0025fdba48009637796018382ec7038e7989e92f2fd2ae3a559a2bcbd3f9f2d2\"" Dec 16 03:26:39.666607 containerd[1591]: time="2025-12-16T03:26:39.666562141Z" level=info msg="StartContainer for \"0025fdba48009637796018382ec7038e7989e92f2fd2ae3a559a2bcbd3f9f2d2\"" Dec 16 03:26:39.668274 containerd[1591]: time="2025-12-16T03:26:39.668237260Z" level=info msg="connecting to shim 0025fdba48009637796018382ec7038e7989e92f2fd2ae3a559a2bcbd3f9f2d2" address="unix:///run/containerd/s/229523360a7dd45da10ac69471ac8773444bf65c36950c1f599973cdd5bfe77a" protocol=ttrpc version=3 Dec 16 03:26:39.707215 systemd[1]: Started cri-containerd-0025fdba48009637796018382ec7038e7989e92f2fd2ae3a559a2bcbd3f9f2d2.scope - libcontainer container 0025fdba48009637796018382ec7038e7989e92f2fd2ae3a559a2bcbd3f9f2d2. 
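The repeated driver-call.go / plugins.go errors above come from the kubelet probing the FlexVolume plugin directory /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds before Calico's flexvol-driver init container (the pod2daemon-flexvol image pulled and started just above) has copied the uds binary into place: the exec fails with "executable file not found in $PATH", the call therefore produces no output, and unmarshalling an empty string yields "unexpected end of JSON input". The noise is transient and stops once the binary exists. As a rough illustrative sketch (not the actual Calico driver), a FlexVolume driver is simply an executable that answers the kubelet's "init" call with a JSON status object on stdout:

    // Illustrative FlexVolume driver stub, assuming only the documented call
    // convention: argv[1] is the operation, the reply is a JSON status on stdout.
    // The empty reply seen in the log above is what makes the kubelet's
    // unmarshal fail with "unexpected end of JSON input".
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        // mount/unmount and the other operations would be handled here.
        fmt.Println(`{"status":"Not supported"}`)
    }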
Dec 16 03:26:39.761000 audit: BPF prog-id=169 op=LOAD Dec 16 03:26:39.761000 audit[3505]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3370 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:39.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030323566646261343830303936333737393630313833383265633730 Dec 16 03:26:39.761000 audit: BPF prog-id=170 op=LOAD Dec 16 03:26:39.761000 audit[3505]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3370 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:39.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030323566646261343830303936333737393630313833383265633730 Dec 16 03:26:39.761000 audit: BPF prog-id=170 op=UNLOAD Dec 16 03:26:39.761000 audit[3505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:39.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030323566646261343830303936333737393630313833383265633730 Dec 16 03:26:39.761000 audit: BPF prog-id=169 op=UNLOAD Dec 16 03:26:39.761000 audit[3505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:39.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030323566646261343830303936333737393630313833383265633730 Dec 16 03:26:39.761000 audit: BPF prog-id=171 op=LOAD Dec 16 03:26:39.761000 audit[3505]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3370 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:39.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030323566646261343830303936333737393630313833383265633730 Dec 16 03:26:39.797431 containerd[1591]: time="2025-12-16T03:26:39.797321935Z" level=info msg="StartContainer for 
\"0025fdba48009637796018382ec7038e7989e92f2fd2ae3a559a2bcbd3f9f2d2\" returns successfully" Dec 16 03:26:39.813892 systemd[1]: cri-containerd-0025fdba48009637796018382ec7038e7989e92f2fd2ae3a559a2bcbd3f9f2d2.scope: Deactivated successfully. Dec 16 03:26:39.818601 containerd[1591]: time="2025-12-16T03:26:39.818551876Z" level=info msg="received container exit event container_id:\"0025fdba48009637796018382ec7038e7989e92f2fd2ae3a559a2bcbd3f9f2d2\" id:\"0025fdba48009637796018382ec7038e7989e92f2fd2ae3a559a2bcbd3f9f2d2\" pid:3518 exited_at:{seconds:1765855599 nanos:817929433}" Dec 16 03:26:39.818000 audit: BPF prog-id=171 op=UNLOAD Dec 16 03:26:39.859591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0025fdba48009637796018382ec7038e7989e92f2fd2ae3a559a2bcbd3f9f2d2-rootfs.mount: Deactivated successfully. Dec 16 03:26:40.076497 kubelet[2854]: E1216 03:26:40.075067 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mjkcx" podUID="d069b10a-0bb5-4869-a283-bc34fbcea4f8" Dec 16 03:26:40.213811 kubelet[2854]: I1216 03:26:40.213771 2854 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 03:26:40.235493 kubelet[2854]: I1216 03:26:40.235428 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bb94b7565-hs4qt" podStartSLOduration=2.809575995 podStartE2EDuration="5.235406633s" podCreationTimestamp="2025-12-16 03:26:35 +0000 UTC" firstStartedPulling="2025-12-16 03:26:36.121976585 +0000 UTC m=+22.312177435" lastFinishedPulling="2025-12-16 03:26:38.54780721 +0000 UTC m=+24.738008073" observedRunningTime="2025-12-16 03:26:39.320744984 +0000 UTC m=+25.510945845" watchObservedRunningTime="2025-12-16 03:26:40.235406633 +0000 UTC m=+26.425607494" Dec 16 03:26:41.220901 containerd[1591]: time="2025-12-16T03:26:41.220778416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 03:26:42.078461 kubelet[2854]: E1216 03:26:42.078393 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mjkcx" podUID="d069b10a-0bb5-4869-a283-bc34fbcea4f8" Dec 16 03:26:44.076237 kubelet[2854]: E1216 03:26:44.075762 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mjkcx" podUID="d069b10a-0bb5-4869-a283-bc34fbcea4f8" Dec 16 03:26:44.787607 containerd[1591]: time="2025-12-16T03:26:44.787549313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:44.789014 containerd[1591]: time="2025-12-16T03:26:44.788901107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 03:26:44.790033 containerd[1591]: time="2025-12-16T03:26:44.789988119Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 
03:26:44.792769 containerd[1591]: time="2025-12-16T03:26:44.792707993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:44.793939 containerd[1591]: time="2025-12-16T03:26:44.793873556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.573018517s" Dec 16 03:26:44.793939 containerd[1591]: time="2025-12-16T03:26:44.793932976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 03:26:44.799382 containerd[1591]: time="2025-12-16T03:26:44.799346183Z" level=info msg="CreateContainer within sandbox \"d64c5204e38bdaa6c4e1c96b57e37d55aef3f49c26ae63f236af92449c364b52\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 03:26:44.812087 containerd[1591]: time="2025-12-16T03:26:44.812049997Z" level=info msg="Container 9c639059a24eb3273cae58496e4e807363bbf00f5c2f15d0bcae34cc12e14fc5: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:26:44.826992 containerd[1591]: time="2025-12-16T03:26:44.826944901Z" level=info msg="CreateContainer within sandbox \"d64c5204e38bdaa6c4e1c96b57e37d55aef3f49c26ae63f236af92449c364b52\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9c639059a24eb3273cae58496e4e807363bbf00f5c2f15d0bcae34cc12e14fc5\"" Dec 16 03:26:44.827840 containerd[1591]: time="2025-12-16T03:26:44.827783719Z" level=info msg="StartContainer for \"9c639059a24eb3273cae58496e4e807363bbf00f5c2f15d0bcae34cc12e14fc5\"" Dec 16 03:26:44.830081 containerd[1591]: time="2025-12-16T03:26:44.830033463Z" level=info msg="connecting to shim 9c639059a24eb3273cae58496e4e807363bbf00f5c2f15d0bcae34cc12e14fc5" address="unix:///run/containerd/s/229523360a7dd45da10ac69471ac8773444bf65c36950c1f599973cdd5bfe77a" protocol=ttrpc version=3 Dec 16 03:26:44.863170 systemd[1]: Started cri-containerd-9c639059a24eb3273cae58496e4e807363bbf00f5c2f15d0bcae34cc12e14fc5.scope - libcontainer container 9c639059a24eb3273cae58496e4e807363bbf00f5c2f15d0bcae34cc12e14fc5. 
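The audit records interleaved with these container starts (BPF prog-id=... op=LOAD/UNLOAD, SYSCALL entries with comm="runc", and the long hex PROCTITLE strings) appear to be the kernel's audit trail of runc setting up each container: syscall 321 is bpf(2), so the LOAD/UNLOAD pairs are runc attaching and releasing BPF programs while it builds the container sandbox. The PROCTITLE field is just the runc command line, hex-encoded with NUL bytes between arguments. A small sketch to decode one of them (standard library only; the hex value shown is a shortened example, paste a full string from the log):

    // Sketch: decode an audit PROCTITLE value (hex-encoded, NUL-separated argv).
    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func main() {
        proctitle := "72756E63002D2D726F6F74" // shortened example; decodes to "runc --root"
        raw, err := hex.DecodeString(proctitle)
        if err != nil {
            panic(err)
        }
        // argv elements are separated by NUL bytes; print them space-separated.
        fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
    }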
Dec 16 03:26:44.933000 audit: BPF prog-id=172 op=LOAD Dec 16 03:26:44.940102 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 16 03:26:44.940228 kernel: audit: type=1334 audit(1765855604.933:552): prog-id=172 op=LOAD Dec 16 03:26:44.933000 audit[3563]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3370 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:44.976948 kernel: audit: type=1300 audit(1765855604.933:552): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3370 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:44.977043 kernel: audit: type=1327 audit(1765855604.933:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363339303539613234656233323733636165353834393665346538 Dec 16 03:26:44.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363339303539613234656233323733636165353834393665346538 Dec 16 03:26:45.012806 kernel: audit: type=1334 audit(1765855604.933:553): prog-id=173 op=LOAD Dec 16 03:26:44.933000 audit: BPF prog-id=173 op=LOAD Dec 16 03:26:44.933000 audit[3563]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3370 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:44.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363339303539613234656233323733636165353834393665346538 Dec 16 03:26:45.071409 kernel: audit: type=1300 audit(1765855604.933:553): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3370 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:45.071504 kernel: audit: type=1327 audit(1765855604.933:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363339303539613234656233323733636165353834393665346538 Dec 16 03:26:45.079166 kernel: audit: type=1334 audit(1765855604.933:554): prog-id=173 op=UNLOAD Dec 16 03:26:44.933000 audit: BPF prog-id=173 op=UNLOAD Dec 16 03:26:44.933000 audit[3563]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:45.137437 kernel: audit: type=1300 
audit(1765855604.933:554): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:45.137625 kernel: audit: type=1327 audit(1765855604.933:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363339303539613234656233323733636165353834393665346538 Dec 16 03:26:44.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363339303539613234656233323733636165353834393665346538 Dec 16 03:26:45.145121 kernel: audit: type=1334 audit(1765855604.933:555): prog-id=172 op=UNLOAD Dec 16 03:26:44.933000 audit: BPF prog-id=172 op=UNLOAD Dec 16 03:26:44.933000 audit[3563]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:44.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363339303539613234656233323733636165353834393665346538 Dec 16 03:26:44.934000 audit: BPF prog-id=174 op=LOAD Dec 16 03:26:44.934000 audit[3563]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3370 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:44.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363339303539613234656233323733636165353834393665346538 Dec 16 03:26:45.149662 containerd[1591]: time="2025-12-16T03:26:45.149332991Z" level=info msg="StartContainer for \"9c639059a24eb3273cae58496e4e807363bbf00f5c2f15d0bcae34cc12e14fc5\" returns successfully" Dec 16 03:26:46.074979 kubelet[2854]: E1216 03:26:46.074898 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mjkcx" podUID="d069b10a-0bb5-4869-a283-bc34fbcea4f8" Dec 16 03:26:46.170403 containerd[1591]: time="2025-12-16T03:26:46.170346280Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 03:26:46.174333 systemd[1]: cri-containerd-9c639059a24eb3273cae58496e4e807363bbf00f5c2f15d0bcae34cc12e14fc5.scope: Deactivated successfully. 
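The "failed to reload cni configuration" message that follows the install-cni container shows why the NetworkReady=false errors keep repeating: containerd watches /etc/cni/net.d, the install-cni container has just written calico-kubeconfig there, but no usable network configuration (for example a *.conflist) is present yet, so the CNI plugin remains uninitialized. A trivial check for the condition containerd is waiting on might look like the following sketch (paths taken from the log, nothing else assumed):

    // Sketch: report whether any CNI network config exists in /etc/cni/net.d,
    // the condition behind "cni plugin not initialized" in this log.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confs, _ := filepath.Glob("/etc/cni/net.d/*.conf")
        lists, _ := filepath.Glob("/etc/cni/net.d/*.conflist")
        if len(confs)+len(lists) == 0 {
            fmt.Println("no network config found in /etc/cni/net.d")
            os.Exit(1)
        }
        fmt.Println("CNI config present:", append(confs, lists...))
    }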
Dec 16 03:26:46.175303 systemd[1]: cri-containerd-9c639059a24eb3273cae58496e4e807363bbf00f5c2f15d0bcae34cc12e14fc5.scope: Consumed 718ms CPU time, 195.2M memory peak, 171.3M written to disk. Dec 16 03:26:46.177000 audit: BPF prog-id=174 op=UNLOAD Dec 16 03:26:46.178528 containerd[1591]: time="2025-12-16T03:26:46.178185089Z" level=info msg="received container exit event container_id:\"9c639059a24eb3273cae58496e4e807363bbf00f5c2f15d0bcae34cc12e14fc5\" id:\"9c639059a24eb3273cae58496e4e807363bbf00f5c2f15d0bcae34cc12e14fc5\" pid:3577 exited_at:{seconds:1765855606 nanos:177683359}" Dec 16 03:26:46.211713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c639059a24eb3273cae58496e4e807363bbf00f5c2f15d0bcae34cc12e14fc5-rootfs.mount: Deactivated successfully. Dec 16 03:26:46.278600 kubelet[2854]: I1216 03:26:46.278521 2854 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 03:26:46.418275 systemd[1]: Created slice kubepods-burstable-pod2ffe169c_1680_458a_8a3d_3c85e36cea72.slice - libcontainer container kubepods-burstable-pod2ffe169c_1680_458a_8a3d_3c85e36cea72.slice. Dec 16 03:26:46.488181 kubelet[2854]: I1216 03:26:46.488118 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltbcm\" (UniqueName: \"kubernetes.io/projected/2ffe169c-1680-458a-8a3d-3c85e36cea72-kube-api-access-ltbcm\") pod \"coredns-674b8bbfcf-md7c7\" (UID: \"2ffe169c-1680-458a-8a3d-3c85e36cea72\") " pod="kube-system/coredns-674b8bbfcf-md7c7" Dec 16 03:26:46.488181 kubelet[2854]: I1216 03:26:46.488179 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ffe169c-1680-458a-8a3d-3c85e36cea72-config-volume\") pod \"coredns-674b8bbfcf-md7c7\" (UID: \"2ffe169c-1680-458a-8a3d-3c85e36cea72\") " pod="kube-system/coredns-674b8bbfcf-md7c7" Dec 16 03:26:46.755171 containerd[1591]: time="2025-12-16T03:26:46.755026446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-md7c7,Uid:2ffe169c-1680-458a-8a3d-3c85e36cea72,Namespace:kube-system,Attempt:0,}" Dec 16 03:26:46.793014 kubelet[2854]: I1216 03:26:46.790250 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f116a091-4f95-4334-821a-705964657507-tigera-ca-bundle\") pod \"calico-kube-controllers-54c6dbfbf4-h7k9m\" (UID: \"f116a091-4f95-4334-821a-705964657507\") " pod="calico-system/calico-kube-controllers-54c6dbfbf4-h7k9m" Dec 16 03:26:46.793014 kubelet[2854]: I1216 03:26:46.790329 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl6xp\" (UniqueName: \"kubernetes.io/projected/f116a091-4f95-4334-821a-705964657507-kube-api-access-fl6xp\") pod \"calico-kube-controllers-54c6dbfbf4-h7k9m\" (UID: \"f116a091-4f95-4334-821a-705964657507\") " pod="calico-system/calico-kube-controllers-54c6dbfbf4-h7k9m" Dec 16 03:26:46.820160 systemd[1]: Created slice kubepods-besteffort-podf116a091_4f95_4334_821a_705964657507.slice - libcontainer container kubepods-besteffort-podf116a091_4f95_4334_821a_705964657507.slice. Dec 16 03:26:46.841698 systemd[1]: Created slice kubepods-burstable-podce64fe8f_be7b_4c39_b3cb_c63b985b6c99.slice - libcontainer container kubepods-burstable-podce64fe8f_be7b_4c39_b3cb_c63b985b6c99.slice. 
Dec 16 03:26:46.858441 systemd[1]: Created slice kubepods-besteffort-pod964e7c44_1e18_4e5b_8b6a_1130081b8647.slice - libcontainer container kubepods-besteffort-pod964e7c44_1e18_4e5b_8b6a_1130081b8647.slice. Dec 16 03:26:46.874199 systemd[1]: Created slice kubepods-besteffort-podf962c147_01d6_4802_8a66_a07063536cc6.slice - libcontainer container kubepods-besteffort-podf962c147_01d6_4802_8a66_a07063536cc6.slice. Dec 16 03:26:46.892014 kubelet[2854]: I1216 03:26:46.891375 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce64fe8f-be7b-4c39-b3cb-c63b985b6c99-config-volume\") pod \"coredns-674b8bbfcf-5db2f\" (UID: \"ce64fe8f-be7b-4c39-b3cb-c63b985b6c99\") " pod="kube-system/coredns-674b8bbfcf-5db2f" Dec 16 03:26:46.893074 kubelet[2854]: I1216 03:26:46.893028 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7t6l\" (UniqueName: \"kubernetes.io/projected/58c0a3f8-05f5-4d50-84f5-6c400d162736-kube-api-access-k7t6l\") pod \"calico-apiserver-984d7f7b9-qw7fg\" (UID: \"58c0a3f8-05f5-4d50-84f5-6c400d162736\") " pod="calico-apiserver/calico-apiserver-984d7f7b9-qw7fg" Dec 16 03:26:46.893364 kubelet[2854]: I1216 03:26:46.893264 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9cxw\" (UniqueName: \"kubernetes.io/projected/ce64fe8f-be7b-4c39-b3cb-c63b985b6c99-kube-api-access-q9cxw\") pod \"coredns-674b8bbfcf-5db2f\" (UID: \"ce64fe8f-be7b-4c39-b3cb-c63b985b6c99\") " pod="kube-system/coredns-674b8bbfcf-5db2f" Dec 16 03:26:46.893527 kubelet[2854]: I1216 03:26:46.893477 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/964e7c44-1e18-4e5b-8b6a-1130081b8647-config\") pod \"goldmane-666569f655-2xvkp\" (UID: \"964e7c44-1e18-4e5b-8b6a-1130081b8647\") " pod="calico-system/goldmane-666569f655-2xvkp" Dec 16 03:26:46.894266 kubelet[2854]: I1216 03:26:46.893527 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/964e7c44-1e18-4e5b-8b6a-1130081b8647-goldmane-ca-bundle\") pod \"goldmane-666569f655-2xvkp\" (UID: \"964e7c44-1e18-4e5b-8b6a-1130081b8647\") " pod="calico-system/goldmane-666569f655-2xvkp" Dec 16 03:26:46.894484 kubelet[2854]: I1216 03:26:46.894389 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/964e7c44-1e18-4e5b-8b6a-1130081b8647-goldmane-key-pair\") pod \"goldmane-666569f655-2xvkp\" (UID: \"964e7c44-1e18-4e5b-8b6a-1130081b8647\") " pod="calico-system/goldmane-666569f655-2xvkp" Dec 16 03:26:46.894607 kubelet[2854]: I1216 03:26:46.894440 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/574af3c5-d781-4fa8-842f-04bccc0c5fcf-calico-apiserver-certs\") pod \"calico-apiserver-984d7f7b9-k94jh\" (UID: \"574af3c5-d781-4fa8-842f-04bccc0c5fcf\") " pod="calico-apiserver/calico-apiserver-984d7f7b9-k94jh" Dec 16 03:26:46.894696 kubelet[2854]: I1216 03:26:46.894599 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26x97\" (UniqueName: 
\"kubernetes.io/projected/f962c147-01d6-4802-8a66-a07063536cc6-kube-api-access-26x97\") pod \"whisker-8545bc7cf8-hdpsg\" (UID: \"f962c147-01d6-4802-8a66-a07063536cc6\") " pod="calico-system/whisker-8545bc7cf8-hdpsg" Dec 16 03:26:46.895241 kubelet[2854]: I1216 03:26:46.895200 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f962c147-01d6-4802-8a66-a07063536cc6-whisker-backend-key-pair\") pod \"whisker-8545bc7cf8-hdpsg\" (UID: \"f962c147-01d6-4802-8a66-a07063536cc6\") " pod="calico-system/whisker-8545bc7cf8-hdpsg" Dec 16 03:26:46.895980 kubelet[2854]: I1216 03:26:46.895267 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/58c0a3f8-05f5-4d50-84f5-6c400d162736-calico-apiserver-certs\") pod \"calico-apiserver-984d7f7b9-qw7fg\" (UID: \"58c0a3f8-05f5-4d50-84f5-6c400d162736\") " pod="calico-apiserver/calico-apiserver-984d7f7b9-qw7fg" Dec 16 03:26:46.895980 kubelet[2854]: I1216 03:26:46.895309 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f962c147-01d6-4802-8a66-a07063536cc6-whisker-ca-bundle\") pod \"whisker-8545bc7cf8-hdpsg\" (UID: \"f962c147-01d6-4802-8a66-a07063536cc6\") " pod="calico-system/whisker-8545bc7cf8-hdpsg" Dec 16 03:26:46.895980 kubelet[2854]: I1216 03:26:46.895388 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm9d8\" (UniqueName: \"kubernetes.io/projected/964e7c44-1e18-4e5b-8b6a-1130081b8647-kube-api-access-hm9d8\") pod \"goldmane-666569f655-2xvkp\" (UID: \"964e7c44-1e18-4e5b-8b6a-1130081b8647\") " pod="calico-system/goldmane-666569f655-2xvkp" Dec 16 03:26:46.895980 kubelet[2854]: I1216 03:26:46.895429 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzp8k\" (UniqueName: \"kubernetes.io/projected/574af3c5-d781-4fa8-842f-04bccc0c5fcf-kube-api-access-nzp8k\") pod \"calico-apiserver-984d7f7b9-k94jh\" (UID: \"574af3c5-d781-4fa8-842f-04bccc0c5fcf\") " pod="calico-apiserver/calico-apiserver-984d7f7b9-k94jh" Dec 16 03:26:46.902098 systemd[1]: Created slice kubepods-besteffort-pod574af3c5_d781_4fa8_842f_04bccc0c5fcf.slice - libcontainer container kubepods-besteffort-pod574af3c5_d781_4fa8_842f_04bccc0c5fcf.slice. Dec 16 03:26:46.949746 systemd[1]: Created slice kubepods-besteffort-pod58c0a3f8_05f5_4d50_84f5_6c400d162736.slice - libcontainer container kubepods-besteffort-pod58c0a3f8_05f5_4d50_84f5_6c400d162736.slice. 
Dec 16 03:26:46.977727 containerd[1591]: time="2025-12-16T03:26:46.977668803Z" level=error msg="Failed to destroy network for sandbox \"7d68946504e80e469e12f1d49698f5411cfd8771047703a31f3a791be660c3b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:46.981136 containerd[1591]: time="2025-12-16T03:26:46.981079969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-md7c7,Uid:2ffe169c-1680-458a-8a3d-3c85e36cea72,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d68946504e80e469e12f1d49698f5411cfd8771047703a31f3a791be660c3b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:46.982514 kubelet[2854]: E1216 03:26:46.982054 2854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d68946504e80e469e12f1d49698f5411cfd8771047703a31f3a791be660c3b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:46.982514 kubelet[2854]: E1216 03:26:46.982140 2854 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d68946504e80e469e12f1d49698f5411cfd8771047703a31f3a791be660c3b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-md7c7" Dec 16 03:26:46.982514 kubelet[2854]: E1216 03:26:46.982173 2854 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d68946504e80e469e12f1d49698f5411cfd8771047703a31f3a791be660c3b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-md7c7" Dec 16 03:26:46.982321 systemd[1]: run-netns-cni\x2de2effe6b\x2d1f2f\x2d0fb5\x2dbc6b\x2d2a3eab14b333.mount: Deactivated successfully. 
Dec 16 03:26:46.985015 kubelet[2854]: E1216 03:26:46.982235 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-md7c7_kube-system(2ffe169c-1680-458a-8a3d-3c85e36cea72)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-md7c7_kube-system(2ffe169c-1680-458a-8a3d-3c85e36cea72)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d68946504e80e469e12f1d49698f5411cfd8771047703a31f3a791be660c3b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-md7c7" podUID="2ffe169c-1680-458a-8a3d-3c85e36cea72" Dec 16 03:26:47.136711 containerd[1591]: time="2025-12-16T03:26:47.136653277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54c6dbfbf4-h7k9m,Uid:f116a091-4f95-4334-821a-705964657507,Namespace:calico-system,Attempt:0,}" Dec 16 03:26:47.154227 containerd[1591]: time="2025-12-16T03:26:47.154032651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5db2f,Uid:ce64fe8f-be7b-4c39-b3cb-c63b985b6c99,Namespace:kube-system,Attempt:0,}" Dec 16 03:26:47.165208 containerd[1591]: time="2025-12-16T03:26:47.165148593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2xvkp,Uid:964e7c44-1e18-4e5b-8b6a-1130081b8647,Namespace:calico-system,Attempt:0,}" Dec 16 03:26:47.196396 containerd[1591]: time="2025-12-16T03:26:47.196157185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8545bc7cf8-hdpsg,Uid:f962c147-01d6-4802-8a66-a07063536cc6,Namespace:calico-system,Attempt:0,}" Dec 16 03:26:47.226760 containerd[1591]: time="2025-12-16T03:26:47.226710321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-984d7f7b9-k94jh,Uid:574af3c5-d781-4fa8-842f-04bccc0c5fcf,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:26:47.264601 containerd[1591]: time="2025-12-16T03:26:47.264558133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-984d7f7b9-qw7fg,Uid:58c0a3f8-05f5-4d50-84f5-6c400d162736,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:26:47.297679 containerd[1591]: time="2025-12-16T03:26:47.297615315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 03:26:47.448027 containerd[1591]: time="2025-12-16T03:26:47.447822240Z" level=error msg="Failed to destroy network for sandbox \"d1a8d32d08b526b36dd69c111eac00c63ca14a1b56d6556e1b076ded30c0ebee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.453329 systemd[1]: run-netns-cni\x2dc5e98d09\x2d9da6\x2d1159\x2d8090\x2d8294e863d7e6.mount: Deactivated successfully. 
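Every RunPodSandbox attempt in this stretch fails the same way: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico-node writes when it starts, and that DaemonSet container is still being pulled (see the PullImage "ghcr.io/flatcar/calico/node:v3.30.4" line above). Until it runs, every sandbox add and delete fails and the kubelet retries with backoff. The precondition the plugin is failing on is essentially the following (an illustrative sketch, not Calico's code):

    // Sketch of the precondition behind the repeated sandbox failures:
    // /var/lib/calico/nodename must exist (calico-node writes it at startup).
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const nodenameFile = "/var/lib/calico/nodename"
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            fmt.Printf("stat %s failed (%v): check that the calico/node container is running\n", nodenameFile, err)
            os.Exit(1)
        }
        fmt.Printf("calico nodename: %s\n", data)
    }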
Dec 16 03:26:47.458682 containerd[1591]: time="2025-12-16T03:26:47.457092375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54c6dbfbf4-h7k9m,Uid:f116a091-4f95-4334-821a-705964657507,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a8d32d08b526b36dd69c111eac00c63ca14a1b56d6556e1b076ded30c0ebee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.458855 kubelet[2854]: E1216 03:26:47.457371 2854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a8d32d08b526b36dd69c111eac00c63ca14a1b56d6556e1b076ded30c0ebee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.458855 kubelet[2854]: E1216 03:26:47.457455 2854 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a8d32d08b526b36dd69c111eac00c63ca14a1b56d6556e1b076ded30c0ebee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54c6dbfbf4-h7k9m" Dec 16 03:26:47.458855 kubelet[2854]: E1216 03:26:47.457495 2854 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a8d32d08b526b36dd69c111eac00c63ca14a1b56d6556e1b076ded30c0ebee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54c6dbfbf4-h7k9m" Dec 16 03:26:47.459743 kubelet[2854]: E1216 03:26:47.457568 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54c6dbfbf4-h7k9m_calico-system(f116a091-4f95-4334-821a-705964657507)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54c6dbfbf4-h7k9m_calico-system(f116a091-4f95-4334-821a-705964657507)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1a8d32d08b526b36dd69c111eac00c63ca14a1b56d6556e1b076ded30c0ebee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54c6dbfbf4-h7k9m" podUID="f116a091-4f95-4334-821a-705964657507" Dec 16 03:26:47.475568 containerd[1591]: time="2025-12-16T03:26:47.475128226Z" level=error msg="Failed to destroy network for sandbox \"32da735cfc0f399fd8ab80ebee81cd6f90cb6aa93f5faab0833863953f878a69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.480438 systemd[1]: run-netns-cni\x2daa9331ef\x2d9544\x2d4d3b\x2d4f97\x2dd574218c9fa8.mount: Deactivated successfully. 
Dec 16 03:26:47.492557 containerd[1591]: time="2025-12-16T03:26:47.492478049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5db2f,Uid:ce64fe8f-be7b-4c39-b3cb-c63b985b6c99,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32da735cfc0f399fd8ab80ebee81cd6f90cb6aa93f5faab0833863953f878a69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.493495 kubelet[2854]: E1216 03:26:47.493035 2854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32da735cfc0f399fd8ab80ebee81cd6f90cb6aa93f5faab0833863953f878a69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.493495 kubelet[2854]: E1216 03:26:47.493404 2854 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32da735cfc0f399fd8ab80ebee81cd6f90cb6aa93f5faab0833863953f878a69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5db2f" Dec 16 03:26:47.493495 kubelet[2854]: E1216 03:26:47.493445 2854 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32da735cfc0f399fd8ab80ebee81cd6f90cb6aa93f5faab0833863953f878a69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5db2f" Dec 16 03:26:47.494098 kubelet[2854]: E1216 03:26:47.493798 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5db2f_kube-system(ce64fe8f-be7b-4c39-b3cb-c63b985b6c99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5db2f_kube-system(ce64fe8f-be7b-4c39-b3cb-c63b985b6c99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32da735cfc0f399fd8ab80ebee81cd6f90cb6aa93f5faab0833863953f878a69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5db2f" podUID="ce64fe8f-be7b-4c39-b3cb-c63b985b6c99" Dec 16 03:26:47.534118 containerd[1591]: time="2025-12-16T03:26:47.534035752Z" level=error msg="Failed to destroy network for sandbox \"a509db6b657dacc4186d6b0ccc89fbd68e20d23b315fc8a10802930d65564997\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.537641 containerd[1591]: time="2025-12-16T03:26:47.537500708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8545bc7cf8-hdpsg,Uid:f962c147-01d6-4802-8a66-a07063536cc6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a509db6b657dacc4186d6b0ccc89fbd68e20d23b315fc8a10802930d65564997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.539751 kubelet[2854]: E1216 03:26:47.538175 2854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a509db6b657dacc4186d6b0ccc89fbd68e20d23b315fc8a10802930d65564997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.539751 kubelet[2854]: E1216 03:26:47.538264 2854 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a509db6b657dacc4186d6b0ccc89fbd68e20d23b315fc8a10802930d65564997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8545bc7cf8-hdpsg" Dec 16 03:26:47.539751 kubelet[2854]: E1216 03:26:47.538297 2854 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a509db6b657dacc4186d6b0ccc89fbd68e20d23b315fc8a10802930d65564997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8545bc7cf8-hdpsg" Dec 16 03:26:47.540113 kubelet[2854]: E1216 03:26:47.538662 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8545bc7cf8-hdpsg_calico-system(f962c147-01d6-4802-8a66-a07063536cc6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8545bc7cf8-hdpsg_calico-system(f962c147-01d6-4802-8a66-a07063536cc6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a509db6b657dacc4186d6b0ccc89fbd68e20d23b315fc8a10802930d65564997\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8545bc7cf8-hdpsg" podUID="f962c147-01d6-4802-8a66-a07063536cc6" Dec 16 03:26:47.545451 containerd[1591]: time="2025-12-16T03:26:47.545404454Z" level=error msg="Failed to destroy network for sandbox \"7f5b42851bb5e07d46b6604d29a42d3bdabe6347974a9513a8301c8137b22598\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.548594 containerd[1591]: time="2025-12-16T03:26:47.548447914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2xvkp,Uid:964e7c44-1e18-4e5b-8b6a-1130081b8647,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5b42851bb5e07d46b6604d29a42d3bdabe6347974a9513a8301c8137b22598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.548840 kubelet[2854]: E1216 03:26:47.548711 2854 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5b42851bb5e07d46b6604d29a42d3bdabe6347974a9513a8301c8137b22598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.548840 kubelet[2854]: E1216 03:26:47.548773 2854 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5b42851bb5e07d46b6604d29a42d3bdabe6347974a9513a8301c8137b22598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2xvkp" Dec 16 03:26:47.548840 kubelet[2854]: E1216 03:26:47.548806 2854 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5b42851bb5e07d46b6604d29a42d3bdabe6347974a9513a8301c8137b22598\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2xvkp" Dec 16 03:26:47.549531 kubelet[2854]: E1216 03:26:47.548874 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-2xvkp_calico-system(964e7c44-1e18-4e5b-8b6a-1130081b8647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-2xvkp_calico-system(964e7c44-1e18-4e5b-8b6a-1130081b8647)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f5b42851bb5e07d46b6604d29a42d3bdabe6347974a9513a8301c8137b22598\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-2xvkp" podUID="964e7c44-1e18-4e5b-8b6a-1130081b8647" Dec 16 03:26:47.568122 containerd[1591]: time="2025-12-16T03:26:47.568072917Z" level=error msg="Failed to destroy network for sandbox \"2c2dcaf120f4dec2d565b611f6df7261cc6597568bf587091ab5132bda1b5f81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.572485 containerd[1591]: time="2025-12-16T03:26:47.571434373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-984d7f7b9-qw7fg,Uid:58c0a3f8-05f5-4d50-84f5-6c400d162736,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c2dcaf120f4dec2d565b611f6df7261cc6597568bf587091ab5132bda1b5f81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.572671 kubelet[2854]: E1216 03:26:47.571692 2854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c2dcaf120f4dec2d565b611f6df7261cc6597568bf587091ab5132bda1b5f81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Dec 16 03:26:47.572671 kubelet[2854]: E1216 03:26:47.571753 2854 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c2dcaf120f4dec2d565b611f6df7261cc6597568bf587091ab5132bda1b5f81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-984d7f7b9-qw7fg" Dec 16 03:26:47.572671 kubelet[2854]: E1216 03:26:47.571788 2854 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c2dcaf120f4dec2d565b611f6df7261cc6597568bf587091ab5132bda1b5f81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-984d7f7b9-qw7fg" Dec 16 03:26:47.572841 kubelet[2854]: E1216 03:26:47.571841 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-984d7f7b9-qw7fg_calico-apiserver(58c0a3f8-05f5-4d50-84f5-6c400d162736)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-984d7f7b9-qw7fg_calico-apiserver(58c0a3f8-05f5-4d50-84f5-6c400d162736)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c2dcaf120f4dec2d565b611f6df7261cc6597568bf587091ab5132bda1b5f81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-qw7fg" podUID="58c0a3f8-05f5-4d50-84f5-6c400d162736" Dec 16 03:26:47.581166 containerd[1591]: time="2025-12-16T03:26:47.581029715Z" level=error msg="Failed to destroy network for sandbox \"20dc767e7b81d246e2e25c720ea3009ae34ef6d8f36d17aa934ca7c11832b0db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.585778 containerd[1591]: time="2025-12-16T03:26:47.585719995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-984d7f7b9-k94jh,Uid:574af3c5-d781-4fa8-842f-04bccc0c5fcf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20dc767e7b81d246e2e25c720ea3009ae34ef6d8f36d17aa934ca7c11832b0db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.586373 kubelet[2854]: E1216 03:26:47.586206 2854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20dc767e7b81d246e2e25c720ea3009ae34ef6d8f36d17aa934ca7c11832b0db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:47.586373 kubelet[2854]: E1216 03:26:47.586281 2854 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20dc767e7b81d246e2e25c720ea3009ae34ef6d8f36d17aa934ca7c11832b0db\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-984d7f7b9-k94jh" Dec 16 03:26:47.586373 kubelet[2854]: E1216 03:26:47.586319 2854 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20dc767e7b81d246e2e25c720ea3009ae34ef6d8f36d17aa934ca7c11832b0db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-984d7f7b9-k94jh" Dec 16 03:26:47.587089 kubelet[2854]: E1216 03:26:47.586681 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-984d7f7b9-k94jh_calico-apiserver(574af3c5-d781-4fa8-842f-04bccc0c5fcf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-984d7f7b9-k94jh_calico-apiserver(574af3c5-d781-4fa8-842f-04bccc0c5fcf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20dc767e7b81d246e2e25c720ea3009ae34ef6d8f36d17aa934ca7c11832b0db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-k94jh" podUID="574af3c5-d781-4fa8-842f-04bccc0c5fcf" Dec 16 03:26:48.093423 systemd[1]: Created slice kubepods-besteffort-podd069b10a_0bb5_4869_a283_bc34fbcea4f8.slice - libcontainer container kubepods-besteffort-podd069b10a_0bb5_4869_a283_bc34fbcea4f8.slice. Dec 16 03:26:48.100690 containerd[1591]: time="2025-12-16T03:26:48.100608640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mjkcx,Uid:d069b10a-0bb5-4869-a283-bc34fbcea4f8,Namespace:calico-system,Attempt:0,}" Dec 16 03:26:48.217356 systemd[1]: run-netns-cni\x2d55d35413\x2d3014\x2da828\x2d00c3\x2d42e7274a765d.mount: Deactivated successfully. Dec 16 03:26:48.217498 systemd[1]: run-netns-cni\x2d879b8139\x2d9952\x2dbca1\x2d6c3c\x2d7eaf1d648f5c.mount: Deactivated successfully. Dec 16 03:26:48.217597 systemd[1]: run-netns-cni\x2d1a5f60a0\x2de55d\x2d4873\x2d621d\x2dfe4a6e4fafcf.mount: Deactivated successfully. Dec 16 03:26:48.217686 systemd[1]: run-netns-cni\x2d39151cf4\x2d1695\x2da02a\x2d42cf\x2d215c9fb41fc8.mount: Deactivated successfully. Dec 16 03:26:48.258107 containerd[1591]: time="2025-12-16T03:26:48.258040079Z" level=error msg="Failed to destroy network for sandbox \"49d1037c6b06f5c33fea48d5429f17a990a8403ffb0fa800e3a684cbf6ef4c3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:48.263301 systemd[1]: run-netns-cni\x2dd69cbc08\x2d6aaa\x2db1fd\x2d15a7\x2d941b81f3e517.mount: Deactivated successfully. 
Dec 16 03:26:48.267689 containerd[1591]: time="2025-12-16T03:26:48.267322834Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mjkcx,Uid:d069b10a-0bb5-4869-a283-bc34fbcea4f8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"49d1037c6b06f5c33fea48d5429f17a990a8403ffb0fa800e3a684cbf6ef4c3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:48.269825 kubelet[2854]: E1216 03:26:48.268380 2854 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49d1037c6b06f5c33fea48d5429f17a990a8403ffb0fa800e3a684cbf6ef4c3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:26:48.269825 kubelet[2854]: E1216 03:26:48.268456 2854 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49d1037c6b06f5c33fea48d5429f17a990a8403ffb0fa800e3a684cbf6ef4c3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mjkcx" Dec 16 03:26:48.269825 kubelet[2854]: E1216 03:26:48.268499 2854 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49d1037c6b06f5c33fea48d5429f17a990a8403ffb0fa800e3a684cbf6ef4c3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mjkcx" Dec 16 03:26:48.270165 kubelet[2854]: E1216 03:26:48.268572 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mjkcx_calico-system(d069b10a-0bb5-4869-a283-bc34fbcea4f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mjkcx_calico-system(d069b10a-0bb5-4869-a283-bc34fbcea4f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49d1037c6b06f5c33fea48d5429f17a990a8403ffb0fa800e3a684cbf6ef4c3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mjkcx" podUID="d069b10a-0bb5-4869-a283-bc34fbcea4f8" Dec 16 03:26:50.075176 kubelet[2854]: I1216 03:26:50.075126 2854 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 03:26:50.179000 audit[3844]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:50.202020 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 03:26:50.202155 kernel: audit: type=1325 audit(1765855610.179:558): table=filter:119 family=2 entries=21 op=nft_register_rule pid=3844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:50.179000 audit[3844]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff2de53050 a2=0 a3=7fff2de5303c items=0 ppid=2998 pid=3844 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:50.235409 kernel: audit: type=1300 audit(1765855610.179:558): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff2de53050 a2=0 a3=7fff2de5303c items=0 ppid=2998 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:50.235526 kernel: audit: type=1327 audit(1765855610.179:558): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:50.179000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:50.235000 audit[3844]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:50.267007 kernel: audit: type=1325 audit(1765855610.235:559): table=nat:120 family=2 entries=19 op=nft_register_chain pid=3844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:50.267103 kernel: audit: type=1300 audit(1765855610.235:559): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff2de53050 a2=0 a3=7fff2de5303c items=0 ppid=2998 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:50.235000 audit[3844]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff2de53050 a2=0 a3=7fff2de5303c items=0 ppid=2998 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:50.235000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:50.315403 kernel: audit: type=1327 audit(1765855610.235:559): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:54.478532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2771947499.mount: Deactivated successfully. 
Dec 16 03:26:54.505106 containerd[1591]: time="2025-12-16T03:26:54.505044941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:54.506352 containerd[1591]: time="2025-12-16T03:26:54.506316734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 03:26:54.507468 containerd[1591]: time="2025-12-16T03:26:54.507406567Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:54.510007 containerd[1591]: time="2025-12-16T03:26:54.509943033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:26:54.510876 containerd[1591]: time="2025-12-16T03:26:54.510810871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.211804439s" Dec 16 03:26:54.510876 containerd[1591]: time="2025-12-16T03:26:54.510865332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 03:26:54.538274 containerd[1591]: time="2025-12-16T03:26:54.538208483Z" level=info msg="CreateContainer within sandbox \"d64c5204e38bdaa6c4e1c96b57e37d55aef3f49c26ae63f236af92449c364b52\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 03:26:54.551573 containerd[1591]: time="2025-12-16T03:26:54.550388207Z" level=info msg="Container 16fbd69d2c16ce1affe0e44dcfb7341b5b43e92a3d184845e85daea2a3d7b1d0: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:26:54.561209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount683281702.mount: Deactivated successfully. Dec 16 03:26:54.567476 containerd[1591]: time="2025-12-16T03:26:54.567413717Z" level=info msg="CreateContainer within sandbox \"d64c5204e38bdaa6c4e1c96b57e37d55aef3f49c26ae63f236af92449c364b52\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"16fbd69d2c16ce1affe0e44dcfb7341b5b43e92a3d184845e85daea2a3d7b1d0\"" Dec 16 03:26:54.568215 containerd[1591]: time="2025-12-16T03:26:54.568177775Z" level=info msg="StartContainer for \"16fbd69d2c16ce1affe0e44dcfb7341b5b43e92a3d184845e85daea2a3d7b1d0\"" Dec 16 03:26:54.570685 containerd[1591]: time="2025-12-16T03:26:54.570599757Z" level=info msg="connecting to shim 16fbd69d2c16ce1affe0e44dcfb7341b5b43e92a3d184845e85daea2a3d7b1d0" address="unix:///run/containerd/s/229523360a7dd45da10ac69471ac8773444bf65c36950c1f599973cdd5bfe77a" protocol=ttrpc version=3 Dec 16 03:26:54.600177 systemd[1]: Started cri-containerd-16fbd69d2c16ce1affe0e44dcfb7341b5b43e92a3d184845e85daea2a3d7b1d0.scope - libcontainer container 16fbd69d2c16ce1affe0e44dcfb7341b5b43e92a3d184845e85daea2a3d7b1d0. 
Dec 16 03:26:54.673000 audit: BPF prog-id=175 op=LOAD Dec 16 03:26:54.673000 audit[3847]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3370 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:54.711630 kernel: audit: type=1334 audit(1765855614.673:560): prog-id=175 op=LOAD Dec 16 03:26:54.711847 kernel: audit: type=1300 audit(1765855614.673:560): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3370 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:54.711894 kernel: audit: type=1327 audit(1765855614.673:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136666264363964326331366365316166666530653434646366623733 Dec 16 03:26:54.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136666264363964326331366365316166666530653434646366623733 Dec 16 03:26:54.674000 audit: BPF prog-id=176 op=LOAD Dec 16 03:26:54.747511 kernel: audit: type=1334 audit(1765855614.674:561): prog-id=176 op=LOAD Dec 16 03:26:54.674000 audit[3847]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3370 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:54.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136666264363964326331366365316166666530653434646366623733 Dec 16 03:26:54.674000 audit: BPF prog-id=176 op=UNLOAD Dec 16 03:26:54.674000 audit[3847]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:54.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136666264363964326331366365316166666530653434646366623733 Dec 16 03:26:54.674000 audit: BPF prog-id=175 op=UNLOAD Dec 16 03:26:54.674000 audit[3847]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3370 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:54.674000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136666264363964326331366365316166666530653434646366623733 Dec 16 03:26:54.674000 audit: BPF prog-id=177 op=LOAD Dec 16 03:26:54.674000 audit[3847]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3370 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:54.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136666264363964326331366365316166666530653434646366623733 Dec 16 03:26:54.773830 containerd[1591]: time="2025-12-16T03:26:54.773704944Z" level=info msg="StartContainer for \"16fbd69d2c16ce1affe0e44dcfb7341b5b43e92a3d184845e85daea2a3d7b1d0\" returns successfully" Dec 16 03:26:54.903784 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 03:26:54.903955 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 03:26:55.154882 kubelet[2854]: I1216 03:26:55.154816 2854 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26x97\" (UniqueName: \"kubernetes.io/projected/f962c147-01d6-4802-8a66-a07063536cc6-kube-api-access-26x97\") pod \"f962c147-01d6-4802-8a66-a07063536cc6\" (UID: \"f962c147-01d6-4802-8a66-a07063536cc6\") " Dec 16 03:26:55.155961 kubelet[2854]: I1216 03:26:55.154891 2854 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f962c147-01d6-4802-8a66-a07063536cc6-whisker-backend-key-pair\") pod \"f962c147-01d6-4802-8a66-a07063536cc6\" (UID: \"f962c147-01d6-4802-8a66-a07063536cc6\") " Dec 16 03:26:55.156972 kubelet[2854]: I1216 03:26:55.156335 2854 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f962c147-01d6-4802-8a66-a07063536cc6-whisker-ca-bundle\") pod \"f962c147-01d6-4802-8a66-a07063536cc6\" (UID: \"f962c147-01d6-4802-8a66-a07063536cc6\") " Dec 16 03:26:55.156972 kubelet[2854]: I1216 03:26:55.156858 2854 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f962c147-01d6-4802-8a66-a07063536cc6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f962c147-01d6-4802-8a66-a07063536cc6" (UID: "f962c147-01d6-4802-8a66-a07063536cc6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 03:26:55.163144 kubelet[2854]: I1216 03:26:55.163106 2854 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f962c147-01d6-4802-8a66-a07063536cc6-kube-api-access-26x97" (OuterVolumeSpecName: "kube-api-access-26x97") pod "f962c147-01d6-4802-8a66-a07063536cc6" (UID: "f962c147-01d6-4802-8a66-a07063536cc6"). InnerVolumeSpecName "kube-api-access-26x97". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 03:26:55.163983 kubelet[2854]: I1216 03:26:55.163950 2854 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f962c147-01d6-4802-8a66-a07063536cc6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f962c147-01d6-4802-8a66-a07063536cc6" (UID: "f962c147-01d6-4802-8a66-a07063536cc6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 03:26:55.257699 kubelet[2854]: I1216 03:26:55.257563 2854 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f962c147-01d6-4802-8a66-a07063536cc6-whisker-ca-bundle\") on node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 03:26:55.258150 kubelet[2854]: I1216 03:26:55.258103 2854 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26x97\" (UniqueName: \"kubernetes.io/projected/f962c147-01d6-4802-8a66-a07063536cc6-kube-api-access-26x97\") on node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 03:26:55.258364 kubelet[2854]: I1216 03:26:55.258288 2854 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f962c147-01d6-4802-8a66-a07063536cc6-whisker-backend-key-pair\") on node \"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal\" DevicePath \"\"" Dec 16 03:26:55.334461 systemd[1]: Removed slice kubepods-besteffort-podf962c147_01d6_4802_8a66_a07063536cc6.slice - libcontainer container kubepods-besteffort-podf962c147_01d6_4802_8a66_a07063536cc6.slice. Dec 16 03:26:55.362466 kubelet[2854]: I1216 03:26:55.362391 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t9hrz" podStartSLOduration=2.019013382 podStartE2EDuration="20.362368994s" podCreationTimestamp="2025-12-16 03:26:35 +0000 UTC" firstStartedPulling="2025-12-16 03:26:36.168974439 +0000 UTC m=+22.359175280" lastFinishedPulling="2025-12-16 03:26:54.512330044 +0000 UTC m=+40.702530892" observedRunningTime="2025-12-16 03:26:55.346213124 +0000 UTC m=+41.536413986" watchObservedRunningTime="2025-12-16 03:26:55.362368994 +0000 UTC m=+41.552569855" Dec 16 03:26:55.420403 systemd[1]: Created slice kubepods-besteffort-pod9487d5f8_7cdd_4f1e_a411_61521d9e1c14.slice - libcontainer container kubepods-besteffort-pod9487d5f8_7cdd_4f1e_a411_61521d9e1c14.slice. 
Dec 16 03:26:55.459730 kubelet[2854]: I1216 03:26:55.459658 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9487d5f8-7cdd-4f1e-a411-61521d9e1c14-whisker-ca-bundle\") pod \"whisker-767dd84f9b-gcdgn\" (UID: \"9487d5f8-7cdd-4f1e-a411-61521d9e1c14\") " pod="calico-system/whisker-767dd84f9b-gcdgn" Dec 16 03:26:55.459730 kubelet[2854]: I1216 03:26:55.459725 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9487d5f8-7cdd-4f1e-a411-61521d9e1c14-whisker-backend-key-pair\") pod \"whisker-767dd84f9b-gcdgn\" (UID: \"9487d5f8-7cdd-4f1e-a411-61521d9e1c14\") " pod="calico-system/whisker-767dd84f9b-gcdgn" Dec 16 03:26:55.459995 kubelet[2854]: I1216 03:26:55.459784 2854 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86l85\" (UniqueName: \"kubernetes.io/projected/9487d5f8-7cdd-4f1e-a411-61521d9e1c14-kube-api-access-86l85\") pod \"whisker-767dd84f9b-gcdgn\" (UID: \"9487d5f8-7cdd-4f1e-a411-61521d9e1c14\") " pod="calico-system/whisker-767dd84f9b-gcdgn" Dec 16 03:26:55.478545 systemd[1]: var-lib-kubelet-pods-f962c147\x2d01d6\x2d4802\x2d8a66\x2da07063536cc6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d26x97.mount: Deactivated successfully. Dec 16 03:26:55.478781 systemd[1]: var-lib-kubelet-pods-f962c147\x2d01d6\x2d4802\x2d8a66\x2da07063536cc6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 03:26:55.727565 containerd[1591]: time="2025-12-16T03:26:55.727410855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-767dd84f9b-gcdgn,Uid:9487d5f8-7cdd-4f1e-a411-61521d9e1c14,Namespace:calico-system,Attempt:0,}" Dec 16 03:26:55.863433 systemd-networkd[1501]: cali8f942d69732: Link UP Dec 16 03:26:55.863763 systemd-networkd[1501]: cali8f942d69732: Gained carrier Dec 16 03:26:55.889238 containerd[1591]: 2025-12-16 03:26:55.763 [INFO][3911] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:26:55.889238 containerd[1591]: 2025-12-16 03:26:55.778 [INFO][3911] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-whisker--767dd84f9b--gcdgn-eth0 whisker-767dd84f9b- calico-system 9487d5f8-7cdd-4f1e-a411-61521d9e1c14 896 0 2025-12-16 03:26:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:767dd84f9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal whisker-767dd84f9b-gcdgn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8f942d69732 [] [] }} ContainerID="e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" Namespace="calico-system" Pod="whisker-767dd84f9b-gcdgn" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-whisker--767dd84f9b--gcdgn-" Dec 16 03:26:55.889238 containerd[1591]: 2025-12-16 03:26:55.778 [INFO][3911] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" Namespace="calico-system" Pod="whisker-767dd84f9b-gcdgn" 
WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-whisker--767dd84f9b--gcdgn-eth0" Dec 16 03:26:55.889238 containerd[1591]: 2025-12-16 03:26:55.810 [INFO][3923] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" HandleID="k8s-pod-network.e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-whisker--767dd84f9b--gcdgn-eth0" Dec 16 03:26:55.889597 containerd[1591]: 2025-12-16 03:26:55.810 [INFO][3923] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" HandleID="k8s-pod-network.e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-whisker--767dd84f9b--gcdgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", "pod":"whisker-767dd84f9b-gcdgn", "timestamp":"2025-12-16 03:26:55.810185568 +0000 UTC"}, Hostname:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:26:55.889597 containerd[1591]: 2025-12-16 03:26:55.810 [INFO][3923] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:26:55.889597 containerd[1591]: 2025-12-16 03:26:55.810 [INFO][3923] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:26:55.889597 containerd[1591]: 2025-12-16 03:26:55.810 [INFO][3923] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal' Dec 16 03:26:55.889597 containerd[1591]: 2025-12-16 03:26:55.822 [INFO][3923] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:55.889597 containerd[1591]: 2025-12-16 03:26:55.827 [INFO][3923] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:55.889597 containerd[1591]: 2025-12-16 03:26:55.832 [INFO][3923] ipam/ipam.go 511: Trying affinity for 192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:55.889597 containerd[1591]: 2025-12-16 03:26:55.834 [INFO][3923] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:55.890073 containerd[1591]: 2025-12-16 03:26:55.837 [INFO][3923] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:55.890073 containerd[1591]: 2025-12-16 03:26:55.837 [INFO][3923] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.128/26 handle="k8s-pod-network.e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:55.890073 containerd[1591]: 2025-12-16 03:26:55.838 [INFO][3923] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2 Dec 16 03:26:55.890073 containerd[1591]: 2025-12-16 03:26:55.843 [INFO][3923] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.128/26 handle="k8s-pod-network.e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:55.890073 containerd[1591]: 2025-12-16 03:26:55.850 [INFO][3923] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.129/26] block=192.168.56.128/26 handle="k8s-pod-network.e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:55.890073 containerd[1591]: 2025-12-16 03:26:55.850 [INFO][3923] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.129/26] handle="k8s-pod-network.e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:26:55.890073 containerd[1591]: 2025-12-16 03:26:55.851 [INFO][3923] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:26:55.890073 containerd[1591]: 2025-12-16 03:26:55.851 [INFO][3923] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.129/26] IPv6=[] ContainerID="e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" HandleID="k8s-pod-network.e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-whisker--767dd84f9b--gcdgn-eth0" Dec 16 03:26:55.890467 containerd[1591]: 2025-12-16 03:26:55.854 [INFO][3911] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" Namespace="calico-system" Pod="whisker-767dd84f9b-gcdgn" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-whisker--767dd84f9b--gcdgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-whisker--767dd84f9b--gcdgn-eth0", GenerateName:"whisker-767dd84f9b-", Namespace:"calico-system", SelfLink:"", UID:"9487d5f8-7cdd-4f1e-a411-61521d9e1c14", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 26, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"767dd84f9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-767dd84f9b-gcdgn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.56.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8f942d69732", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:26:55.890592 containerd[1591]: 2025-12-16 03:26:55.854 [INFO][3911] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.129/32] ContainerID="e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" Namespace="calico-system" Pod="whisker-767dd84f9b-gcdgn" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-whisker--767dd84f9b--gcdgn-eth0" Dec 16 03:26:55.890592 containerd[1591]: 2025-12-16 03:26:55.854 [INFO][3911] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f942d69732 ContainerID="e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" Namespace="calico-system" Pod="whisker-767dd84f9b-gcdgn" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-whisker--767dd84f9b--gcdgn-eth0" Dec 16 03:26:55.890592 containerd[1591]: 2025-12-16 03:26:55.863 [INFO][3911] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" Namespace="calico-system" Pod="whisker-767dd84f9b-gcdgn" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-whisker--767dd84f9b--gcdgn-eth0" Dec 16 03:26:55.891601 
containerd[1591]: 2025-12-16 03:26:55.864 [INFO][3911] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" Namespace="calico-system" Pod="whisker-767dd84f9b-gcdgn" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-whisker--767dd84f9b--gcdgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-whisker--767dd84f9b--gcdgn-eth0", GenerateName:"whisker-767dd84f9b-", Namespace:"calico-system", SelfLink:"", UID:"9487d5f8-7cdd-4f1e-a411-61521d9e1c14", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 26, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"767dd84f9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", ContainerID:"e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2", Pod:"whisker-767dd84f9b-gcdgn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.56.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8f942d69732", MAC:"16:26:97:30:7b:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:26:55.892074 containerd[1591]: 2025-12-16 03:26:55.879 [INFO][3911] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" Namespace="calico-system" Pod="whisker-767dd84f9b-gcdgn" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-whisker--767dd84f9b--gcdgn-eth0" Dec 16 03:26:55.927173 containerd[1591]: time="2025-12-16T03:26:55.927027594Z" level=info msg="connecting to shim e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2" address="unix:///run/containerd/s/9708deafe7632340fc6156d0dd469c39c5f6c1da85b80382452fe6c80872f45a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:26:55.966163 systemd[1]: Started cri-containerd-e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2.scope - libcontainer container e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2. 
Dec 16 03:26:55.981000 audit: BPF prog-id=178 op=LOAD Dec 16 03:26:55.987397 kernel: kauditd_printk_skb: 11 callbacks suppressed Dec 16 03:26:55.987499 kernel: audit: type=1334 audit(1765855615.981:565): prog-id=178 op=LOAD Dec 16 03:26:55.984000 audit: BPF prog-id=179 op=LOAD Dec 16 03:26:56.001623 kernel: audit: type=1334 audit(1765855615.984:566): prog-id=179 op=LOAD Dec 16 03:26:55.984000 audit[3955]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3942 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.031506 kernel: audit: type=1300 audit(1765855615.984:566): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3942 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:55.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535666134636539303462313965663265376566383864616165393163 Dec 16 03:26:56.060147 kernel: audit: type=1327 audit(1765855615.984:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535666134636539303462313965663265376566383864616165393163 Dec 16 03:26:56.068172 kernel: audit: type=1334 audit(1765855615.984:567): prog-id=179 op=UNLOAD Dec 16 03:26:55.984000 audit: BPF prog-id=179 op=UNLOAD Dec 16 03:26:55.984000 audit[3955]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3942 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.086195 kubelet[2854]: I1216 03:26:56.081184 2854 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f962c147-01d6-4802-8a66-a07063536cc6" path="/var/lib/kubelet/pods/f962c147-01d6-4802-8a66-a07063536cc6/volumes" Dec 16 03:26:56.097428 kernel: audit: type=1300 audit(1765855615.984:567): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3942 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.097986 kernel: audit: type=1327 audit(1765855615.984:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535666134636539303462313965663265376566383864616165393163 Dec 16 03:26:55.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535666134636539303462313965663265376566383864616165393163 Dec 16 03:26:55.984000 audit: BPF prog-id=180 op=LOAD Dec 16 03:26:55.984000 audit[3955]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c00017a488 a2=98 a3=0 items=0 ppid=3942 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.144674 containerd[1591]: time="2025-12-16T03:26:56.135867870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-767dd84f9b-gcdgn,Uid:9487d5f8-7cdd-4f1e-a411-61521d9e1c14,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5fa4ce904b19ef2e7ef88daae91c95b691dede5f02364e319b273123e55a4c2\"" Dec 16 03:26:56.144674 containerd[1591]: time="2025-12-16T03:26:56.142473159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:26:56.163899 kernel: audit: type=1334 audit(1765855615.984:568): prog-id=180 op=LOAD Dec 16 03:26:56.164434 kernel: audit: type=1300 audit(1765855615.984:568): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3942 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.165040 kernel: audit: type=1327 audit(1765855615.984:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535666134636539303462313965663265376566383864616165393163 Dec 16 03:26:55.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535666134636539303462313965663265376566383864616165393163 Dec 16 03:26:55.984000 audit: BPF prog-id=181 op=LOAD Dec 16 03:26:55.984000 audit[3955]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3942 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:55.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535666134636539303462313965663265376566383864616165393163 Dec 16 03:26:55.984000 audit: BPF prog-id=181 op=UNLOAD Dec 16 03:26:55.984000 audit[3955]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3942 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:55.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535666134636539303462313965663265376566383864616165393163 Dec 16 03:26:55.984000 audit: BPF prog-id=180 op=UNLOAD Dec 16 03:26:55.984000 audit[3955]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3942 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:55.984000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535666134636539303462313965663265376566383864616165393163 Dec 16 03:26:55.984000 audit: BPF prog-id=182 op=LOAD Dec 16 03:26:55.984000 audit[3955]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3942 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:55.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535666134636539303462313965663265376566383864616165393163 Dec 16 03:26:56.314803 containerd[1591]: time="2025-12-16T03:26:56.314636407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:26:56.316265 containerd[1591]: time="2025-12-16T03:26:56.316213792Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:26:56.316386 containerd[1591]: time="2025-12-16T03:26:56.316323217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:26:56.316644 kubelet[2854]: E1216 03:26:56.316556 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:26:56.316644 kubelet[2854]: E1216 03:26:56.316631 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:26:56.317573 kubelet[2854]: E1216 03:26:56.316841 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f5fe397c66444a048cd1698a856f56dd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-86l85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767dd84f9b-gcdgn_calico-system(9487d5f8-7cdd-4f1e-a411-61521d9e1c14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:26:56.319771 containerd[1591]: time="2025-12-16T03:26:56.319722735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:26:56.483452 containerd[1591]: time="2025-12-16T03:26:56.483392752Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:26:56.486663 containerd[1591]: time="2025-12-16T03:26:56.486534768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:26:56.486663 containerd[1591]: time="2025-12-16T03:26:56.486654573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:26:56.488626 kubelet[2854]: E1216 03:26:56.487071 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:26:56.488626 kubelet[2854]: E1216 03:26:56.487136 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:26:56.488821 kubelet[2854]: E1216 03:26:56.487325 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86l85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767dd84f9b-gcdgn_calico-system(9487d5f8-7cdd-4f1e-a411-61521d9e1c14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:26:56.489248 kubelet[2854]: E1216 03:26:56.489014 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767dd84f9b-gcdgn" podUID="9487d5f8-7cdd-4f1e-a411-61521d9e1c14" Dec 16 03:26:56.811000 audit: BPF prog-id=183 op=LOAD Dec 16 03:26:56.811000 audit[4080]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe38627450 a2=98 a3=1fffffffffffffff items=0 ppid=3998 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.811000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:26:56.811000 audit: BPF prog-id=183 op=UNLOAD Dec 16 03:26:56.811000 audit[4080]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe38627420 a3=0 items=0 ppid=3998 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.811000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:26:56.811000 audit: BPF prog-id=184 op=LOAD Dec 16 03:26:56.811000 audit[4080]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe38627330 a2=94 a3=3 items=0 ppid=3998 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.811000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:26:56.811000 audit: BPF prog-id=184 op=UNLOAD Dec 16 03:26:56.811000 audit[4080]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe38627330 a2=94 a3=3 items=0 ppid=3998 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.811000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:26:56.811000 audit: BPF prog-id=185 op=LOAD Dec 16 03:26:56.811000 audit[4080]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe38627370 a2=94 a3=7ffe38627550 items=0 ppid=3998 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.811000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:26:56.811000 audit: BPF prog-id=185 op=UNLOAD Dec 16 03:26:56.811000 audit[4080]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe38627370 a2=94 a3=7ffe38627550 items=0 ppid=3998 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.811000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:26:56.816000 audit: BPF prog-id=186 op=LOAD Dec 16 03:26:56.816000 audit[4081]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefe4fa1a0 a2=98 a3=3 items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.816000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:56.816000 audit: BPF prog-id=186 op=UNLOAD Dec 16 03:26:56.816000 audit[4081]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffefe4fa170 a3=0 items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.816000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:56.817000 audit: BPF prog-id=187 op=LOAD Dec 16 03:26:56.817000 audit[4081]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffefe4f9f90 a2=94 a3=54428f items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:56.817000 audit: BPF prog-id=187 op=UNLOAD Dec 16 03:26:56.817000 audit[4081]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffefe4f9f90 a2=94 a3=54428f items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:56.817000 audit: BPF prog-id=188 op=LOAD Dec 16 03:26:56.817000 audit[4081]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffefe4f9fc0 a2=94 a3=2 items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:56.817000 audit: BPF prog-id=188 op=UNLOAD Dec 16 03:26:56.817000 audit[4081]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffefe4f9fc0 a2=0 a3=2 items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:56.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:57.069000 audit: BPF prog-id=189 op=LOAD Dec 16 03:26:57.069000 audit[4081]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffefe4f9e80 a2=94 a3=1 items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:26:57.069000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:57.069000 audit: BPF prog-id=189 op=UNLOAD Dec 16 03:26:57.069000 audit[4081]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffefe4f9e80 a2=94 a3=1 items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.069000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:57.085000 audit: BPF prog-id=190 op=LOAD Dec 16 03:26:57.085000 audit[4081]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffefe4f9e70 a2=94 a3=4 items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.085000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:57.085000 audit: BPF prog-id=190 op=UNLOAD Dec 16 03:26:57.085000 audit[4081]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffefe4f9e70 a2=0 a3=4 items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.085000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:57.085000 audit: BPF prog-id=191 op=LOAD Dec 16 03:26:57.085000 audit[4081]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffefe4f9cd0 a2=94 a3=5 items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.085000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:57.085000 audit: BPF prog-id=191 op=UNLOAD Dec 16 03:26:57.085000 audit[4081]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffefe4f9cd0 a2=0 a3=5 items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.085000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:57.085000 audit: BPF prog-id=192 op=LOAD Dec 16 03:26:57.085000 audit[4081]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffefe4f9ef0 a2=94 a3=6 items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.085000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:57.085000 audit: BPF prog-id=192 op=UNLOAD Dec 16 03:26:57.085000 audit[4081]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffefe4f9ef0 a2=0 a3=6 items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.085000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:57.086000 audit: BPF prog-id=193 op=LOAD Dec 16 03:26:57.086000 audit[4081]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffefe4f96a0 a2=94 a3=88 items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.086000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:57.086000 audit: BPF prog-id=194 op=LOAD Dec 16 03:26:57.086000 audit[4081]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffefe4f9520 a2=94 a3=2 items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.086000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:57.086000 audit: BPF prog-id=194 op=UNLOAD Dec 16 03:26:57.086000 audit[4081]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffefe4f9550 a2=0 a3=7ffefe4f9650 items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.086000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:57.087000 audit: BPF prog-id=193 op=UNLOAD Dec 16 03:26:57.087000 audit[4081]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3e28dd10 a2=0 a3=b18fc1ca4b89042e items=0 ppid=3998 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.087000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:26:57.100000 audit: BPF prog-id=195 op=LOAD Dec 16 03:26:57.100000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff7bec13b0 a2=98 a3=1999999999999999 items=0 ppid=3998 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.100000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:26:57.100000 audit: BPF prog-id=195 op=UNLOAD Dec 16 03:26:57.100000 audit[4104]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff7bec1380 a3=0 items=0 ppid=3998 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.100000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:26:57.100000 audit: BPF prog-id=196 op=LOAD Dec 16 03:26:57.100000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff7bec1290 a2=94 a3=ffff items=0 ppid=3998 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.100000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:26:57.101000 audit: BPF prog-id=196 op=UNLOAD Dec 16 03:26:57.101000 audit[4104]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff7bec1290 a2=94 a3=ffff items=0 ppid=3998 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.101000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:26:57.101000 audit: BPF prog-id=197 op=LOAD Dec 16 03:26:57.101000 audit[4104]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff7bec12d0 a2=94 a3=7fff7bec14b0 items=0 ppid=3998 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.101000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:26:57.101000 audit: BPF prog-id=197 op=UNLOAD Dec 16 03:26:57.101000 audit[4104]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff7bec12d0 a2=94 a3=7fff7bec14b0 items=0 ppid=3998 pid=4104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.101000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:26:57.232565 systemd-networkd[1501]: vxlan.calico: Link UP Dec 16 03:26:57.232578 systemd-networkd[1501]: vxlan.calico: Gained carrier Dec 16 03:26:57.249000 audit: BPF prog-id=198 op=LOAD Dec 16 03:26:57.249000 audit[4129]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe57c88300 a2=98 a3=0 items=0 ppid=3998 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.249000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:26:57.249000 audit: BPF prog-id=198 op=UNLOAD Dec 16 03:26:57.249000 audit[4129]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe57c882d0 a3=0 items=0 ppid=3998 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.249000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:26:57.249000 audit: BPF prog-id=199 op=LOAD Dec 16 03:26:57.249000 audit[4129]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe57c88110 a2=94 a3=54428f items=0 ppid=3998 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.249000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:26:57.249000 audit: BPF prog-id=199 op=UNLOAD Dec 16 03:26:57.249000 audit[4129]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe57c88110 a2=94 a3=54428f items=0 ppid=3998 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.249000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:26:57.249000 audit: BPF prog-id=200 op=LOAD Dec 16 03:26:57.249000 audit[4129]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe57c88140 a2=94 a3=2 items=0 ppid=3998 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.249000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:26:57.249000 audit: BPF prog-id=200 op=UNLOAD Dec 16 03:26:57.249000 audit[4129]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe57c88140 a2=0 a3=2 items=0 ppid=3998 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.249000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:26:57.249000 audit: BPF prog-id=201 op=LOAD Dec 16 03:26:57.249000 audit[4129]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe57c87ef0 a2=94 a3=4 items=0 ppid=3998 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.249000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:26:57.249000 audit: BPF prog-id=201 op=UNLOAD Dec 16 03:26:57.249000 audit[4129]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe57c87ef0 a2=94 a3=4 items=0 ppid=3998 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.249000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:26:57.249000 audit: BPF prog-id=202 op=LOAD Dec 16 03:26:57.249000 audit[4129]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe57c87ff0 a2=94 a3=7ffe57c88170 items=0 ppid=3998 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.249000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:26:57.249000 audit: BPF prog-id=202 op=UNLOAD Dec 16 03:26:57.249000 audit[4129]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe57c87ff0 a2=0 a3=7ffe57c88170 items=0 ppid=3998 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.249000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:26:57.250000 audit: BPF prog-id=203 op=LOAD Dec 16 03:26:57.250000 audit[4129]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe57c87720 a2=94 a3=2 items=0 ppid=3998 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.250000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:26:57.250000 audit: BPF prog-id=203 op=UNLOAD Dec 16 03:26:57.250000 audit[4129]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe57c87720 a2=0 a3=2 items=0 ppid=3998 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.250000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:26:57.250000 audit: BPF prog-id=204 op=LOAD Dec 16 03:26:57.250000 audit[4129]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe57c87820 a2=94 a3=30 items=0 ppid=3998 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.250000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:26:57.259000 audit: BPF prog-id=205 op=LOAD Dec 16 03:26:57.259000 audit[4132]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd1498b50 a2=98 a3=0 items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.259000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.259000 audit: BPF prog-id=205 op=UNLOAD Dec 16 03:26:57.259000 audit[4132]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffd1498b20 a3=0 items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.259000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.259000 audit: BPF prog-id=206 op=LOAD Dec 16 03:26:57.259000 audit[4132]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd1498940 a2=94 a3=54428f items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.259000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.260000 audit: BPF prog-id=206 op=UNLOAD Dec 16 03:26:57.260000 audit[4132]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffd1498940 a2=94 a3=54428f items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.260000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.260000 audit: BPF prog-id=207 op=LOAD Dec 16 03:26:57.260000 audit[4132]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd1498970 a2=94 a3=2 items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.260000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.260000 audit: BPF prog-id=207 op=UNLOAD Dec 16 03:26:57.260000 audit[4132]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffd1498970 a2=0 a3=2 items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.260000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.330571 systemd-networkd[1501]: cali8f942d69732: Gained IPv6LL Dec 16 03:26:57.339763 kubelet[2854]: E1216 03:26:57.339404 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767dd84f9b-gcdgn" podUID="9487d5f8-7cdd-4f1e-a411-61521d9e1c14" Dec 16 03:26:57.372000 audit[4138]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4138 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:57.372000 audit[4138]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffa6102350 a2=0 a3=7fffa610233c items=0 ppid=2998 pid=4138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.372000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:57.375000 audit[4138]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4138 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:26:57.375000 audit[4138]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffa6102350 a2=0 a3=0 items=0 ppid=2998 pid=4138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.375000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:57.507000 audit: BPF prog-id=208 op=LOAD Dec 16 03:26:57.507000 audit[4132]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd1498830 a2=94 a3=1 items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.507000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.507000 audit: BPF prog-id=208 op=UNLOAD Dec 16 03:26:57.507000 audit[4132]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffd1498830 a2=94 a3=1 items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.507000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.522000 audit: BPF prog-id=209 op=LOAD Dec 16 03:26:57.522000 audit[4132]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffd1498820 a2=94 a3=4 items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.522000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.522000 audit: BPF prog-id=209 op=UNLOAD Dec 16 03:26:57.522000 audit[4132]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffd1498820 a2=0 a3=4 items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.522000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.522000 audit: BPF prog-id=210 op=LOAD Dec 16 03:26:57.522000 audit[4132]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffd1498680 a2=94 a3=5 items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.522000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.523000 audit: BPF prog-id=210 op=UNLOAD Dec 16 03:26:57.523000 audit[4132]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffd1498680 a2=0 a3=5 items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.523000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.523000 audit: BPF prog-id=211 op=LOAD Dec 16 03:26:57.523000 audit[4132]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffd14988a0 a2=94 a3=6 items=0 ppid=3998 pid=4132 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.523000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.523000 audit: BPF prog-id=211 op=UNLOAD Dec 16 03:26:57.523000 audit[4132]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffd14988a0 a2=0 a3=6 items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.523000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.523000 audit: BPF prog-id=212 op=LOAD Dec 16 03:26:57.523000 audit[4132]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffd1498050 a2=94 a3=88 items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.523000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.524000 audit: BPF prog-id=213 op=LOAD Dec 16 03:26:57.524000 audit[4132]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fffd1497ed0 a2=94 a3=2 items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.524000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.524000 audit: BPF prog-id=213 op=UNLOAD Dec 16 03:26:57.524000 audit[4132]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fffd1497f00 a2=0 a3=7fffd1498000 items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.524000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.524000 audit: BPF prog-id=212 op=UNLOAD Dec 16 03:26:57.524000 audit[4132]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3fff9d10 a2=0 a3=4b0028d38f93419f items=0 ppid=3998 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.524000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:26:57.533000 audit: BPF prog-id=204 op=UNLOAD 
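The PROCTITLE fields in the audit records above are the kernel audit subsystem's hex encoding of the recorded command line, with NUL bytes (0x00) separating arguments. Decoded, the bpftool invocations in this stretch include, for example, bpftool map create /sys/fs/bpf/tc/globals/cali_ctlb_progs type prog_array key 4 value 4 entries 3 name cali_ctlb_progs flags 0, bpftool map list --json, and bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp. A minimal Python sketch for decoding these fields offline (the function and variable names are illustrative, not part of the log):

def decode_proctitle(hex_value: str) -> str:
    # audit records argv as hex bytes, with NUL separators between arguments
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg)

# sample value copied from one of the records above
print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))  # -> bpftool map list --json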
Dec 16 03:26:57.533000 audit[3998]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00099e380 a2=0 a3=0 items=0 ppid=3986 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.533000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 03:26:57.608000 audit[4162]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4162 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:26:57.608000 audit[4162]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffeac2e8290 a2=0 a3=7ffeac2e827c items=0 ppid=3998 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.608000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:26:57.624000 audit[4165]: NETFILTER_CFG table=mangle:124 family=2 entries=16 op=nft_register_chain pid=4165 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:26:57.624000 audit[4165]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffff95ba1a0 a2=0 a3=7ffff95ba18c items=0 ppid=3998 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.624000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:26:57.628000 audit[4160]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4160 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:26:57.628000 audit[4160]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc5c936770 a2=0 a3=7ffc5c93675c items=0 ppid=3998 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.628000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:26:57.634000 audit[4163]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=4163 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:26:57.634000 audit[4163]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffc16887880 a2=0 a3=7ffc1688786c items=0 ppid=3998 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:57.634000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:26:58.673232 systemd-networkd[1501]: vxlan.calico: Gained IPv6LL Dec 16 03:26:58.983010 kubelet[2854]: I1216 03:26:58.982671 2854 prober_manager.go:312] "Failed to trigger a manual 
run" probe="Readiness" Dec 16 03:27:00.076223 containerd[1591]: time="2025-12-16T03:27:00.076079914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5db2f,Uid:ce64fe8f-be7b-4c39-b3cb-c63b985b6c99,Namespace:kube-system,Attempt:0,}" Dec 16 03:27:00.077028 containerd[1591]: time="2025-12-16T03:27:00.076859402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54c6dbfbf4-h7k9m,Uid:f116a091-4f95-4334-821a-705964657507,Namespace:calico-system,Attempt:0,}" Dec 16 03:27:00.319832 systemd-networkd[1501]: cali70f7a5a9b1e: Link UP Dec 16 03:27:00.321941 systemd-networkd[1501]: cali70f7a5a9b1e: Gained carrier Dec 16 03:27:00.357039 containerd[1591]: 2025-12-16 03:27:00.179 [INFO][4225] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--kube--controllers--54c6dbfbf4--h7k9m-eth0 calico-kube-controllers-54c6dbfbf4- calico-system f116a091-4f95-4334-821a-705964657507 820 0 2025-12-16 03:26:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54c6dbfbf4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal calico-kube-controllers-54c6dbfbf4-h7k9m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali70f7a5a9b1e [] [] }} ContainerID="d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" Namespace="calico-system" Pod="calico-kube-controllers-54c6dbfbf4-h7k9m" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--kube--controllers--54c6dbfbf4--h7k9m-" Dec 16 03:27:00.357039 containerd[1591]: 2025-12-16 03:27:00.179 [INFO][4225] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" Namespace="calico-system" Pod="calico-kube-controllers-54c6dbfbf4-h7k9m" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--kube--controllers--54c6dbfbf4--h7k9m-eth0" Dec 16 03:27:00.357039 containerd[1591]: 2025-12-16 03:27:00.261 [INFO][4253] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" HandleID="k8s-pod-network.d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--kube--controllers--54c6dbfbf4--h7k9m-eth0" Dec 16 03:27:00.357636 containerd[1591]: 2025-12-16 03:27:00.262 [INFO][4253] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" HandleID="k8s-pod-network.d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--kube--controllers--54c6dbfbf4--h7k9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f900), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", "pod":"calico-kube-controllers-54c6dbfbf4-h7k9m", "timestamp":"2025-12-16 03:27:00.261083961 +0000 UTC"}, Hostname:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:27:00.357636 containerd[1591]: 2025-12-16 03:27:00.262 [INFO][4253] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:27:00.357636 containerd[1591]: 2025-12-16 03:27:00.262 [INFO][4253] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:27:00.357636 containerd[1591]: 2025-12-16 03:27:00.262 [INFO][4253] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal' Dec 16 03:27:00.357636 containerd[1591]: 2025-12-16 03:27:00.276 [INFO][4253] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.357636 containerd[1591]: 2025-12-16 03:27:00.281 [INFO][4253] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.357636 containerd[1591]: 2025-12-16 03:27:00.288 [INFO][4253] ipam/ipam.go 511: Trying affinity for 192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.357636 containerd[1591]: 2025-12-16 03:27:00.291 [INFO][4253] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.358516 containerd[1591]: 2025-12-16 03:27:00.295 [INFO][4253] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.358516 containerd[1591]: 2025-12-16 03:27:00.295 [INFO][4253] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.128/26 handle="k8s-pod-network.d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.358516 containerd[1591]: 2025-12-16 03:27:00.296 [INFO][4253] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b Dec 16 03:27:00.358516 containerd[1591]: 2025-12-16 03:27:00.302 [INFO][4253] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.128/26 handle="k8s-pod-network.d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.358516 containerd[1591]: 2025-12-16 03:27:00.310 [INFO][4253] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.130/26] block=192.168.56.128/26 handle="k8s-pod-network.d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.358516 containerd[1591]: 2025-12-16 03:27:00.310 [INFO][4253] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.130/26] handle="k8s-pod-network.d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.358516 containerd[1591]: 2025-12-16 03:27:00.310 [INFO][4253] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
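The NETFILTER_CFG records in this section (for example table=filter:121 entries=20 op=nft_register_rule and table=filter:126 entries=94 op=nft_register_chain above) show felix programming its chains and rules through iptables-nft-restore, one audited syscall per table. A rough sketch for tallying those counts per table from a saved copy of this journal, assuming the records are available as plain text (the file name and regex are illustrative):

import re
from collections import Counter

# matches the NETFILTER_CFG records above, e.g. "NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain"
pattern = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=\d+ entries=(\d+) op=(\w+)")

totals = Counter()
with open("journal.txt") as f:  # hypothetical plain-text export of this journal
    for line in f:
        for table, entries, op in pattern.findall(line):
            totals[(table, op)] += int(entries)

for (table, op), count in sorted(totals.items()):
    print(f"{table:8} {op:20} {count}")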
Dec 16 03:27:00.358516 containerd[1591]: 2025-12-16 03:27:00.310 [INFO][4253] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.130/26] IPv6=[] ContainerID="d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" HandleID="k8s-pod-network.d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--kube--controllers--54c6dbfbf4--h7k9m-eth0" Dec 16 03:27:00.359399 containerd[1591]: 2025-12-16 03:27:00.316 [INFO][4225] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" Namespace="calico-system" Pod="calico-kube-controllers-54c6dbfbf4-h7k9m" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--kube--controllers--54c6dbfbf4--h7k9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--kube--controllers--54c6dbfbf4--h7k9m-eth0", GenerateName:"calico-kube-controllers-54c6dbfbf4-", Namespace:"calico-system", SelfLink:"", UID:"f116a091-4f95-4334-821a-705964657507", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 26, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54c6dbfbf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-54c6dbfbf4-h7k9m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali70f7a5a9b1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:27:00.359694 containerd[1591]: 2025-12-16 03:27:00.316 [INFO][4225] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.130/32] ContainerID="d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" Namespace="calico-system" Pod="calico-kube-controllers-54c6dbfbf4-h7k9m" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--kube--controllers--54c6dbfbf4--h7k9m-eth0" Dec 16 03:27:00.359694 containerd[1591]: 2025-12-16 03:27:00.316 [INFO][4225] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70f7a5a9b1e ContainerID="d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" Namespace="calico-system" Pod="calico-kube-controllers-54c6dbfbf4-h7k9m" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--kube--controllers--54c6dbfbf4--h7k9m-eth0" Dec 16 03:27:00.359694 containerd[1591]: 2025-12-16 03:27:00.321 [INFO][4225] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" Namespace="calico-system" Pod="calico-kube-controllers-54c6dbfbf4-h7k9m" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--kube--controllers--54c6dbfbf4--h7k9m-eth0" Dec 16 03:27:00.360305 containerd[1591]: 2025-12-16 03:27:00.323 [INFO][4225] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" Namespace="calico-system" Pod="calico-kube-controllers-54c6dbfbf4-h7k9m" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--kube--controllers--54c6dbfbf4--h7k9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--kube--controllers--54c6dbfbf4--h7k9m-eth0", GenerateName:"calico-kube-controllers-54c6dbfbf4-", Namespace:"calico-system", SelfLink:"", UID:"f116a091-4f95-4334-821a-705964657507", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 26, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54c6dbfbf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", ContainerID:"d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b", Pod:"calico-kube-controllers-54c6dbfbf4-h7k9m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.56.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali70f7a5a9b1e", MAC:"fe:53:61:a5:76:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:27:00.360305 containerd[1591]: 2025-12-16 03:27:00.346 [INFO][4225] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" Namespace="calico-system" Pod="calico-kube-controllers-54c6dbfbf4-h7k9m" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--kube--controllers--54c6dbfbf4--h7k9m-eth0" Dec 16 03:27:00.425934 containerd[1591]: time="2025-12-16T03:27:00.425689166Z" level=info msg="connecting to shim d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b" address="unix:///run/containerd/s/bee0846b303a6d6aec3a73078b44ba33a6259bd0ee9caa83040fcac4eb06edc4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:27:00.434000 audit[4274]: NETFILTER_CFG table=filter:127 family=2 entries=36 op=nft_register_chain pid=4274 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:27:00.434000 audit[4274]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffd9e576340 a2=0 a3=7ffd9e57632c items=0 ppid=3998 pid=4274 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.434000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:27:00.475499 systemd-networkd[1501]: cali1a79d147d88: Link UP Dec 16 03:27:00.479003 systemd-networkd[1501]: cali1a79d147d88: Gained carrier Dec 16 03:27:00.510191 systemd[1]: Started cri-containerd-d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b.scope - libcontainer container d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b. Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.178 [INFO][4227] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--5db2f-eth0 coredns-674b8bbfcf- kube-system ce64fe8f-be7b-4c39-b3cb-c63b985b6c99 821 0 2025-12-16 03:26:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal coredns-674b8bbfcf-5db2f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1a79d147d88 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" Namespace="kube-system" Pod="coredns-674b8bbfcf-5db2f" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--5db2f-" Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.178 [INFO][4227] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" Namespace="kube-system" Pod="coredns-674b8bbfcf-5db2f" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--5db2f-eth0" Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.262 [INFO][4248] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" HandleID="k8s-pod-network.ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--5db2f-eth0" Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.263 [INFO][4248] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" HandleID="k8s-pod-network.ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--5db2f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031ecb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", "pod":"coredns-674b8bbfcf-5db2f", "timestamp":"2025-12-16 03:27:00.262815649 +0000 UTC"}, Hostname:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.263 [INFO][4248] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.310 [INFO][4248] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.311 [INFO][4248] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal' Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.377 [INFO][4248] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.389 [INFO][4248] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.405 [INFO][4248] ipam/ipam.go 511: Trying affinity for 192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.412 [INFO][4248] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.421 [INFO][4248] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.423 [INFO][4248] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.128/26 handle="k8s-pod-network.ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.428 [INFO][4248] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373 Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.437 [INFO][4248] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.128/26 handle="k8s-pod-network.ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.454 [INFO][4248] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.131/26] block=192.168.56.128/26 handle="k8s-pod-network.ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.454 [INFO][4248] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.131/26] handle="k8s-pod-network.ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.454 [INFO][4248] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
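The ipam records above trace the assignment path as recorded here: acquire the host-wide IPAM lock, look up this node's block affinity, load the block 192.168.56.128/26, claim the next free address, and release the lock — yielding 192.168.56.130 for calico-kube-controllers-54c6dbfbf4-h7k9m and 192.168.56.131 for coredns-674b8bbfcf-5db2f. A quick check of the block arithmetic with Python's ipaddress module, using values copied from the log:

import ipaddress

block = ipaddress.ip_network("192.168.56.128/26")    # the node's affinity block from the IPAM records
claimed = [ipaddress.ip_address("192.168.56.130"),   # calico-kube-controllers-54c6dbfbf4-h7k9m
           ipaddress.ip_address("192.168.56.131")]   # coredns-674b8bbfcf-5db2f

print(block.num_addresses)                 # 64 addresses per /26 block
print(all(ip in block for ip in claimed))  # True: both pod IPs fall inside the block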
Dec 16 03:27:00.528338 containerd[1591]: 2025-12-16 03:27:00.454 [INFO][4248] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.131/26] IPv6=[] ContainerID="ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" HandleID="k8s-pod-network.ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--5db2f-eth0" Dec 16 03:27:00.530870 containerd[1591]: 2025-12-16 03:27:00.464 [INFO][4227] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" Namespace="kube-system" Pod="coredns-674b8bbfcf-5db2f" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--5db2f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--5db2f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ce64fe8f-be7b-4c39-b3cb-c63b985b6c99", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 26, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-674b8bbfcf-5db2f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a79d147d88", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:27:00.530870 containerd[1591]: 2025-12-16 03:27:00.465 [INFO][4227] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.131/32] ContainerID="ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" Namespace="kube-system" Pod="coredns-674b8bbfcf-5db2f" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--5db2f-eth0" Dec 16 03:27:00.530870 containerd[1591]: 2025-12-16 03:27:00.465 [INFO][4227] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a79d147d88 ContainerID="ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" Namespace="kube-system" Pod="coredns-674b8bbfcf-5db2f" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--5db2f-eth0" Dec 16 03:27:00.530870 containerd[1591]: 
2025-12-16 03:27:00.494 [INFO][4227] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" Namespace="kube-system" Pod="coredns-674b8bbfcf-5db2f" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--5db2f-eth0" Dec 16 03:27:00.530870 containerd[1591]: 2025-12-16 03:27:00.499 [INFO][4227] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" Namespace="kube-system" Pod="coredns-674b8bbfcf-5db2f" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--5db2f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--5db2f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ce64fe8f-be7b-4c39-b3cb-c63b985b6c99", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 26, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", ContainerID:"ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373", Pod:"coredns-674b8bbfcf-5db2f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a79d147d88", MAC:"5e:58:39:cd:d8:8e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:27:00.530870 containerd[1591]: 2025-12-16 03:27:00.517 [INFO][4227] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" Namespace="kube-system" Pod="coredns-674b8bbfcf-5db2f" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--5db2f-eth0" Dec 16 03:27:00.545000 audit: BPF prog-id=214 op=LOAD Dec 16 03:27:00.548000 audit: BPF prog-id=215 op=LOAD Dec 16 03:27:00.548000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4280 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:27:00.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438376666393238313233663430336562363232373834316166343931 Dec 16 03:27:00.548000 audit: BPF prog-id=215 op=UNLOAD Dec 16 03:27:00.548000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4280 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438376666393238313233663430336562363232373834316166343931 Dec 16 03:27:00.549000 audit: BPF prog-id=216 op=LOAD Dec 16 03:27:00.549000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4280 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438376666393238313233663430336562363232373834316166343931 Dec 16 03:27:00.549000 audit: BPF prog-id=217 op=LOAD Dec 16 03:27:00.549000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4280 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438376666393238313233663430336562363232373834316166343931 Dec 16 03:27:00.549000 audit: BPF prog-id=217 op=UNLOAD Dec 16 03:27:00.549000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4280 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438376666393238313233663430336562363232373834316166343931 Dec 16 03:27:00.549000 audit: BPF prog-id=216 op=UNLOAD Dec 16 03:27:00.549000 audit[4293]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4280 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.549000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438376666393238313233663430336562363232373834316166343931 Dec 16 03:27:00.549000 audit: BPF prog-id=218 op=LOAD Dec 16 03:27:00.549000 audit[4293]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4280 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438376666393238313233663430336562363232373834316166343931 Dec 16 03:27:00.599034 containerd[1591]: time="2025-12-16T03:27:00.598972764Z" level=info msg="connecting to shim ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373" address="unix:///run/containerd/s/930ac3a2c045ffbae2b087cbcd3f26603080ea4759075d679f430e0e74c7a65a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:27:00.603000 audit[4324]: NETFILTER_CFG table=filter:128 family=2 entries=46 op=nft_register_chain pid=4324 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:27:00.603000 audit[4324]: SYSCALL arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7fff20ea1580 a2=0 a3=7fff20ea156c items=0 ppid=3998 pid=4324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.603000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:27:00.650857 containerd[1591]: time="2025-12-16T03:27:00.650611405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54c6dbfbf4-h7k9m,Uid:f116a091-4f95-4334-821a-705964657507,Namespace:calico-system,Attempt:0,} returns sandbox id \"d87ff928123f403eb6227841af49163763000f939053167b852ad59028300e4b\"" Dec 16 03:27:00.655484 containerd[1591]: time="2025-12-16T03:27:00.655446321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:27:00.669156 systemd[1]: Started cri-containerd-ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373.scope - libcontainer container ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373. 
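The PROCTITLE values in the audit records above are the process command line hex-encoded, with NUL bytes separating the arguments (the audit subsystem hex-encodes a proctitle that contains non-printable characters, and truncates long ones). A short Go sketch, assuming only the standard library, that turns such a value back into a readable command line:

    // decode-proctitle.go - decode a hex-encoded audit PROCTITLE value passed as the first argument.
    package main

    import (
        "encoding/hex"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        if len(os.Args) != 2 {
            log.Fatal("usage: decode-proctitle <hex string>")
        }
        raw, err := hex.DecodeString(os.Args[1])
        if err != nil {
            log.Fatal(err)
        }
        // argv entries are NUL-separated; join them with spaces for display.
        fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
    }

Applied to the values above, the prefix 72756E63002D2D726F6F74... decodes to "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/...", with the trailing container ID cut off by the proctitle length limit.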
Dec 16 03:27:00.685000 audit: BPF prog-id=219 op=LOAD Dec 16 03:27:00.686000 audit: BPF prog-id=220 op=LOAD Dec 16 03:27:00.686000 audit[4342]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4330 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365363530616466353237316566346631383930623036396436393038 Dec 16 03:27:00.686000 audit: BPF prog-id=220 op=UNLOAD Dec 16 03:27:00.686000 audit[4342]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4330 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365363530616466353237316566346631383930623036396436393038 Dec 16 03:27:00.687000 audit: BPF prog-id=221 op=LOAD Dec 16 03:27:00.687000 audit[4342]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4330 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365363530616466353237316566346631383930623036396436393038 Dec 16 03:27:00.687000 audit: BPF prog-id=222 op=LOAD Dec 16 03:27:00.687000 audit[4342]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4330 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365363530616466353237316566346631383930623036396436393038 Dec 16 03:27:00.687000 audit: BPF prog-id=222 op=UNLOAD Dec 16 03:27:00.687000 audit[4342]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4330 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365363530616466353237316566346631383930623036396436393038 Dec 16 03:27:00.687000 audit: BPF prog-id=221 op=UNLOAD Dec 16 03:27:00.687000 audit[4342]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4330 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365363530616466353237316566346631383930623036396436393038 Dec 16 03:27:00.688000 audit: BPF prog-id=223 op=LOAD Dec 16 03:27:00.688000 audit[4342]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4330 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365363530616466353237316566346631383930623036396436393038 Dec 16 03:27:00.741723 containerd[1591]: time="2025-12-16T03:27:00.741676798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5db2f,Uid:ce64fe8f-be7b-4c39-b3cb-c63b985b6c99,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373\"" Dec 16 03:27:00.749581 containerd[1591]: time="2025-12-16T03:27:00.749541287Z" level=info msg="CreateContainer within sandbox \"ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:27:00.761583 containerd[1591]: time="2025-12-16T03:27:00.761163238Z" level=info msg="Container 23e996b892803ececf57c6127783930e96895b5dd9140ce2b09a0be7bdd73a1a: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:27:00.773652 containerd[1591]: time="2025-12-16T03:27:00.773578389Z" level=info msg="CreateContainer within sandbox \"ce650adf5271ef4f1890b069d69089ec5ade236522dda6fb0655db527c984373\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23e996b892803ececf57c6127783930e96895b5dd9140ce2b09a0be7bdd73a1a\"" Dec 16 03:27:00.774534 containerd[1591]: time="2025-12-16T03:27:00.774470072Z" level=info msg="StartContainer for \"23e996b892803ececf57c6127783930e96895b5dd9140ce2b09a0be7bdd73a1a\"" Dec 16 03:27:00.777149 containerd[1591]: time="2025-12-16T03:27:00.777102796Z" level=info msg="connecting to shim 23e996b892803ececf57c6127783930e96895b5dd9140ce2b09a0be7bdd73a1a" address="unix:///run/containerd/s/930ac3a2c045ffbae2b087cbcd3f26603080ea4759075d679f430e0e74c7a65a" protocol=ttrpc version=3 Dec 16 03:27:00.803324 systemd[1]: Started cri-containerd-23e996b892803ececf57c6127783930e96895b5dd9140ce2b09a0be7bdd73a1a.scope - libcontainer container 23e996b892803ececf57c6127783930e96895b5dd9140ce2b09a0be7bdd73a1a. 
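Each runc invocation above appears in the audit stream as a burst of "BPF prog-id=N op=LOAD" records followed by matching op=UNLOAD records as the short-lived programs are released. A small Go sketch (illustrative only; reads journal text on stdin) that tallies loads against unloads to show which program IDs remain loaded within the captured window:

    // bpf-audit-balance.go - pair BPF LOAD/UNLOAD audit records from stdin.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        re := regexp.MustCompile(`BPF prog-id=(\d+) op=(LOAD|UNLOAD)`)
        outstanding := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024)
        for sc.Scan() {
            for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
                if m[2] == "LOAD" {
                    outstanding[m[1]]++
                } else {
                    outstanding[m[1]]--
                }
            }
        }
        for id, n := range outstanding {
            if n > 0 {
                fmt.Printf("prog-id=%s: %d LOAD(s) without UNLOAD in this window\n", id, n)
            }
        }
    }

On this capture it would flag IDs such as 214, 219 and 224, which are loaded here but not unloaded within the excerpt.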
Dec 16 03:27:00.820000 audit: BPF prog-id=224 op=LOAD Dec 16 03:27:00.821000 audit: BPF prog-id=225 op=LOAD Dec 16 03:27:00.821000 audit[4376]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4330 pid=4376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233653939366238393238303365636563663537633631323737383339 Dec 16 03:27:00.821000 audit: BPF prog-id=225 op=UNLOAD Dec 16 03:27:00.821000 audit[4376]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4330 pid=4376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233653939366238393238303365636563663537633631323737383339 Dec 16 03:27:00.821000 audit: BPF prog-id=226 op=LOAD Dec 16 03:27:00.821000 audit[4376]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4330 pid=4376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233653939366238393238303365636563663537633631323737383339 Dec 16 03:27:00.821000 audit: BPF prog-id=227 op=LOAD Dec 16 03:27:00.821000 audit[4376]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4330 pid=4376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233653939366238393238303365636563663537633631323737383339 Dec 16 03:27:00.821000 audit: BPF prog-id=227 op=UNLOAD Dec 16 03:27:00.821000 audit[4376]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4330 pid=4376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233653939366238393238303365636563663537633631323737383339 Dec 16 03:27:00.821000 audit: BPF prog-id=226 op=UNLOAD Dec 16 03:27:00.821000 audit[4376]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4330 pid=4376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233653939366238393238303365636563663537633631323737383339 Dec 16 03:27:00.821000 audit: BPF prog-id=228 op=LOAD Dec 16 03:27:00.821000 audit[4376]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4330 pid=4376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:00.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233653939366238393238303365636563663537633631323737383339 Dec 16 03:27:00.830867 containerd[1591]: time="2025-12-16T03:27:00.830459325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:00.831942 containerd[1591]: time="2025-12-16T03:27:00.831857778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:27:00.832382 containerd[1591]: time="2025-12-16T03:27:00.832073890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:00.833167 kubelet[2854]: E1216 03:27:00.833125 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:27:00.834571 kubelet[2854]: E1216 03:27:00.833185 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:27:00.834571 kubelet[2854]: E1216 03:27:00.833374 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fl6xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54c6dbfbf4-h7k9m_calico-system(f116a091-4f95-4334-821a-705964657507): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:00.834571 kubelet[2854]: E1216 03:27:00.834539 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c6dbfbf4-h7k9m" podUID="f116a091-4f95-4334-821a-705964657507" Dec 16 03:27:00.858284 containerd[1591]: time="2025-12-16T03:27:00.858231685Z" level=info msg="StartContainer for \"23e996b892803ececf57c6127783930e96895b5dd9140ce2b09a0be7bdd73a1a\" returns successfully" Dec 16 
03:27:01.076504 containerd[1591]: time="2025-12-16T03:27:01.076444726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2xvkp,Uid:964e7c44-1e18-4e5b-8b6a-1130081b8647,Namespace:calico-system,Attempt:0,}" Dec 16 03:27:01.280113 systemd-networkd[1501]: cali5694dce1445: Link UP Dec 16 03:27:01.282579 systemd-networkd[1501]: cali5694dce1445: Gained carrier Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.162 [INFO][4410] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-goldmane--666569f655--2xvkp-eth0 goldmane-666569f655- calico-system 964e7c44-1e18-4e5b-8b6a-1130081b8647 824 0 2025-12-16 03:26:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal goldmane-666569f655-2xvkp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5694dce1445 [] [] }} ContainerID="fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" Namespace="calico-system" Pod="goldmane-666569f655-2xvkp" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-goldmane--666569f655--2xvkp-" Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.162 [INFO][4410] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" Namespace="calico-system" Pod="goldmane-666569f655-2xvkp" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-goldmane--666569f655--2xvkp-eth0" Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.220 [INFO][4422] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" HandleID="k8s-pod-network.fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-goldmane--666569f655--2xvkp-eth0" Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.220 [INFO][4422] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" HandleID="k8s-pod-network.fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-goldmane--666569f655--2xvkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cefe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", "pod":"goldmane-666569f655-2xvkp", "timestamp":"2025-12-16 03:27:01.220065369 +0000 UTC"}, Hostname:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.221 [INFO][4422] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.221 [INFO][4422] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.221 [INFO][4422] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal' Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.234 [INFO][4422] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.239 [INFO][4422] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.247 [INFO][4422] ipam/ipam.go 511: Trying affinity for 192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.249 [INFO][4422] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.252 [INFO][4422] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.252 [INFO][4422] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.128/26 handle="k8s-pod-network.fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.254 [INFO][4422] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1 Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.262 [INFO][4422] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.128/26 handle="k8s-pod-network.fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.270 [INFO][4422] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.132/26] block=192.168.56.128/26 handle="k8s-pod-network.fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.270 [INFO][4422] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.132/26] handle="k8s-pod-network.fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.270 [INFO][4422] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:27:01.309869 containerd[1591]: 2025-12-16 03:27:01.271 [INFO][4422] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.132/26] IPv6=[] ContainerID="fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" HandleID="k8s-pod-network.fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-goldmane--666569f655--2xvkp-eth0" Dec 16 03:27:01.314489 containerd[1591]: 2025-12-16 03:27:01.274 [INFO][4410] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" Namespace="calico-system" Pod="goldmane-666569f655-2xvkp" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-goldmane--666569f655--2xvkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-goldmane--666569f655--2xvkp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"964e7c44-1e18-4e5b-8b6a-1130081b8647", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 26, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-666569f655-2xvkp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.56.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5694dce1445", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:27:01.314489 containerd[1591]: 2025-12-16 03:27:01.274 [INFO][4410] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.132/32] ContainerID="fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" Namespace="calico-system" Pod="goldmane-666569f655-2xvkp" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-goldmane--666569f655--2xvkp-eth0" Dec 16 03:27:01.314489 containerd[1591]: 2025-12-16 03:27:01.274 [INFO][4410] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5694dce1445 ContainerID="fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" Namespace="calico-system" Pod="goldmane-666569f655-2xvkp" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-goldmane--666569f655--2xvkp-eth0" Dec 16 03:27:01.314489 containerd[1591]: 2025-12-16 03:27:01.280 [INFO][4410] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" Namespace="calico-system" Pod="goldmane-666569f655-2xvkp" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-goldmane--666569f655--2xvkp-eth0" Dec 16 
03:27:01.314489 containerd[1591]: 2025-12-16 03:27:01.281 [INFO][4410] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" Namespace="calico-system" Pod="goldmane-666569f655-2xvkp" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-goldmane--666569f655--2xvkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-goldmane--666569f655--2xvkp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"964e7c44-1e18-4e5b-8b6a-1130081b8647", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 26, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", ContainerID:"fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1", Pod:"goldmane-666569f655-2xvkp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.56.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5694dce1445", MAC:"ba:34:8c:de:6b:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:27:01.314489 containerd[1591]: 2025-12-16 03:27:01.296 [INFO][4410] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" Namespace="calico-system" Pod="goldmane-666569f655-2xvkp" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-goldmane--666569f655--2xvkp-eth0" Dec 16 03:27:01.372867 kubelet[2854]: E1216 03:27:01.372528 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c6dbfbf4-h7k9m" podUID="f116a091-4f95-4334-821a-705964657507" Dec 16 03:27:01.381167 containerd[1591]: time="2025-12-16T03:27:01.380097307Z" level=info msg="connecting to shim fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1" address="unix:///run/containerd/s/a8c6304c5d36d2799b313c1a7e7ed37bf1f7afcf81dc367c83aeaece289300fd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:27:01.407951 kernel: kauditd_printk_skb: 288 callbacks suppressed Dec 16 03:27:01.408095 kernel: audit: type=1325 audit(1765855621.383:667): table=filter:129 family=2 entries=52 op=nft_register_chain pid=4445 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:27:01.383000 audit[4445]: NETFILTER_CFG table=filter:129 family=2 entries=52 op=nft_register_chain pid=4445 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:27:01.466412 kernel: audit: type=1300 audit(1765855621.383:667): arch=c000003e syscall=46 success=yes exit=27556 a0=3 a1=7ffcf1b46fe0 a2=0 a3=7ffcf1b46fcc items=0 ppid=3998 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:01.383000 audit[4445]: SYSCALL arch=c000003e syscall=46 success=yes exit=27556 a0=3 a1=7ffcf1b46fe0 a2=0 a3=7ffcf1b46fcc items=0 ppid=3998 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:01.383000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:27:01.488950 kernel: audit: type=1327 audit(1765855621.383:667): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:27:01.489706 systemd-networkd[1501]: cali70f7a5a9b1e: Gained IPv6LL Dec 16 03:27:01.516190 kubelet[2854]: I1216 03:27:01.516122 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5db2f" podStartSLOduration=43.516099799 podStartE2EDuration="43.516099799s" podCreationTimestamp="2025-12-16 03:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:27:01.465029425 +0000 UTC m=+47.655230284" watchObservedRunningTime="2025-12-16 03:27:01.516099799 +0000 UTC m=+47.706300663" Dec 16 03:27:01.543971 kernel: audit: type=1325 audit(1765855621.525:668): table=filter:130 family=2 entries=20 op=nft_register_rule pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:27:01.525000 audit[4471]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:27:01.543645 systemd[1]: Started cri-containerd-fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1.scope - libcontainer container fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1. 
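The NETFILTER_CFG records interleaved above log each iptables-nft restore transaction against the kernel's nftables state: table and generation (e.g. filter:129), address family (family=2 is AF_INET), the number of entries touched, and the operation (nft_register_chain, nft_register_rule). A minimal Go sketch (illustrative only; stdin in, one summary line per record out) for skimming these transactions out of a capture like this:

    // nft-audit-summary.go - summarise NETFILTER_CFG audit records from stdin.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // e.g.: NETFILTER_CFG table=filter:129 family=2 entries=52 op=nft_register_chain
        re := regexp.MustCompile(`NETFILTER_CFG table=(\S+) family=(\d+) entries=(\d+) op=(\S+)`)
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024)
        for sc.Scan() {
            for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
                fmt.Printf("table=%-14s family=%s %-22s entries=%s\n", m[1], m[2], m[4], m[3])
            }
        }
    }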
Dec 16 03:27:01.589080 kernel: audit: type=1300 audit(1765855621.525:668): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeae50d510 a2=0 a3=7ffeae50d4fc items=0 ppid=2998 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:01.525000 audit[4471]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeae50d510 a2=0 a3=7ffeae50d4fc items=0 ppid=2998 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:01.605070 kernel: audit: type=1327 audit(1765855621.525:668): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:27:01.525000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:27:01.549000 audit[4471]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:27:01.626931 kernel: audit: type=1325 audit(1765855621.549:669): table=nat:131 family=2 entries=14 op=nft_register_rule pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:27:01.549000 audit[4471]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffeae50d510 a2=0 a3=0 items=0 ppid=2998 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:01.549000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:27:01.679069 kernel: audit: type=1300 audit(1765855621.549:669): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffeae50d510 a2=0 a3=0 items=0 ppid=2998 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:01.679116 kernel: audit: type=1327 audit(1765855621.549:669): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:27:01.686374 kernel: audit: type=1334 audit(1765855621.566:670): prog-id=229 op=LOAD Dec 16 03:27:01.566000 audit: BPF prog-id=229 op=LOAD Dec 16 03:27:01.566000 audit: BPF prog-id=230 op=LOAD Dec 16 03:27:01.566000 audit[4458]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4446 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:01.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313831383638376561346235643339393038333232616537316336 Dec 16 03:27:01.566000 audit: BPF prog-id=230 op=UNLOAD Dec 16 03:27:01.566000 audit[4458]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=4446 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:01.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313831383638376561346235643339393038333232616537316336 Dec 16 03:27:01.566000 audit: BPF prog-id=231 op=LOAD Dec 16 03:27:01.566000 audit[4458]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4446 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:01.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313831383638376561346235643339393038333232616537316336 Dec 16 03:27:01.566000 audit: BPF prog-id=232 op=LOAD Dec 16 03:27:01.566000 audit[4458]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4446 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:01.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313831383638376561346235643339393038333232616537316336 Dec 16 03:27:01.566000 audit: BPF prog-id=232 op=UNLOAD Dec 16 03:27:01.566000 audit[4458]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4446 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:01.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313831383638376561346235643339393038333232616537316336 Dec 16 03:27:01.566000 audit: BPF prog-id=231 op=UNLOAD Dec 16 03:27:01.566000 audit[4458]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4446 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:01.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313831383638376561346235643339393038333232616537316336 Dec 16 03:27:01.566000 audit: BPF prog-id=233 op=LOAD Dec 16 03:27:01.566000 audit[4458]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4446 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:01.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664313831383638376561346235643339393038333232616537316336 Dec 16 03:27:01.721000 audit[4480]: NETFILTER_CFG table=filter:132 family=2 entries=17 op=nft_register_rule pid=4480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:27:01.721000 audit[4480]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeb6e87ee0 a2=0 a3=7ffeb6e87ecc items=0 ppid=2998 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:01.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:27:01.726000 audit[4480]: NETFILTER_CFG table=nat:133 family=2 entries=35 op=nft_register_chain pid=4480 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:27:01.726000 audit[4480]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffeb6e87ee0 a2=0 a3=7ffeb6e87ecc items=0 ppid=2998 pid=4480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:01.726000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:27:01.796273 containerd[1591]: time="2025-12-16T03:27:01.794344160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2xvkp,Uid:964e7c44-1e18-4e5b-8b6a-1130081b8647,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd1818687ea4b5d39908322ae71c692b15f810f47d3390e78daaba2251456bf1\"" Dec 16 03:27:01.800079 containerd[1591]: time="2025-12-16T03:27:01.799665939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:27:01.957876 containerd[1591]: time="2025-12-16T03:27:01.957741180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:01.960059 containerd[1591]: time="2025-12-16T03:27:01.959798360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:27:01.960196 containerd[1591]: time="2025-12-16T03:27:01.959830456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:01.961860 kubelet[2854]: E1216 03:27:01.961743 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:27:01.961860 kubelet[2854]: E1216 03:27:01.961821 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:27:01.963220 kubelet[2854]: E1216 03:27:01.962068 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hm9d8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2xvkp_calico-system(964e7c44-1e18-4e5b-8b6a-1130081b8647): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:01.964049 kubelet[2854]: E1216 03:27:01.963982 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2xvkp" 
podUID="964e7c44-1e18-4e5b-8b6a-1130081b8647" Dec 16 03:27:02.077741 containerd[1591]: time="2025-12-16T03:27:02.077587949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-md7c7,Uid:2ffe169c-1680-458a-8a3d-3c85e36cea72,Namespace:kube-system,Attempt:0,}" Dec 16 03:27:02.129122 systemd-networkd[1501]: cali1a79d147d88: Gained IPv6LL Dec 16 03:27:02.272656 systemd-networkd[1501]: cali348b61d0188: Link UP Dec 16 03:27:02.273219 systemd-networkd[1501]: cali348b61d0188: Gained carrier Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.159 [INFO][4487] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--md7c7-eth0 coredns-674b8bbfcf- kube-system 2ffe169c-1680-458a-8a3d-3c85e36cea72 818 0 2025-12-16 03:26:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal coredns-674b8bbfcf-md7c7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali348b61d0188 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" Namespace="kube-system" Pod="coredns-674b8bbfcf-md7c7" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--md7c7-" Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.160 [INFO][4487] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" Namespace="kube-system" Pod="coredns-674b8bbfcf-md7c7" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--md7c7-eth0" Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.207 [INFO][4500] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" HandleID="k8s-pod-network.0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--md7c7-eth0" Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.208 [INFO][4500] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" HandleID="k8s-pod-network.0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--md7c7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5670), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", "pod":"coredns-674b8bbfcf-md7c7", "timestamp":"2025-12-16 03:27:02.207831483 +0000 UTC"}, Hostname:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.208 [INFO][4500] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
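The PullImage failures earlier in this window (ghcr.io answering 404 for calico/kube-controllers:v3.30.4 and calico/goldmane:v3.30.4) surface three times each: as containerd error events, as kubelet "Failed to pull image" records, and finally as ErrImagePull / ImagePullBackOff on the affected pods. A small Go sketch (illustrative only; duplicates are possible because every retry logs again) that collects the failed image references and the pods they block from a journal capture on stdin:

    // pull-failures.go - list image pull failures and the pods they block.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        imgRe := regexp.MustCompile(`image="([^"]+)"`)
        podRe := regexp.MustCompile(`pod="([^"]+)"`)
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024)
        for sc.Scan() {
            line := sc.Text()
            if strings.Contains(line, "Failed to pull image") {
                if m := imgRe.FindStringSubmatch(line); m != nil {
                    fmt.Println("failed image:", m[1])
                }
            }
            if strings.Contains(line, "ErrImagePull") {
                if m := podRe.FindStringSubmatch(line); m != nil {
                    fmt.Println("blocked pod: ", m[1])
                }
            }
        }
    }

On this excerpt it would report ghcr.io/flatcar/calico/kube-controllers:v3.30.4 and ghcr.io/flatcar/calico/goldmane:v3.30.4 as the failed images, blocking calico-kube-controllers-54c6dbfbf4-h7k9m and goldmane-666569f655-2xvkp.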
Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.208 [INFO][4500] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.208 [INFO][4500] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal' Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.220 [INFO][4500] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.227 [INFO][4500] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.234 [INFO][4500] ipam/ipam.go 511: Trying affinity for 192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.237 [INFO][4500] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.240 [INFO][4500] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.240 [INFO][4500] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.128/26 handle="k8s-pod-network.0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.242 [INFO][4500] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07 Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.247 [INFO][4500] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.128/26 handle="k8s-pod-network.0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.258 [INFO][4500] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.133/26] block=192.168.56.128/26 handle="k8s-pod-network.0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.258 [INFO][4500] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.133/26] handle="k8s-pod-network.0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.259 [INFO][4500] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:27:02.302714 containerd[1591]: 2025-12-16 03:27:02.259 [INFO][4500] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.133/26] IPv6=[] ContainerID="0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" HandleID="k8s-pod-network.0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--md7c7-eth0" Dec 16 03:27:02.303769 containerd[1591]: 2025-12-16 03:27:02.262 [INFO][4487] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" Namespace="kube-system" Pod="coredns-674b8bbfcf-md7c7" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--md7c7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--md7c7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2ffe169c-1680-458a-8a3d-3c85e36cea72", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 26, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-674b8bbfcf-md7c7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali348b61d0188", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:27:02.303769 containerd[1591]: 2025-12-16 03:27:02.263 [INFO][4487] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.133/32] ContainerID="0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" Namespace="kube-system" Pod="coredns-674b8bbfcf-md7c7" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--md7c7-eth0" Dec 16 03:27:02.303769 containerd[1591]: 2025-12-16 03:27:02.263 [INFO][4487] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali348b61d0188 ContainerID="0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" Namespace="kube-system" Pod="coredns-674b8bbfcf-md7c7" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--md7c7-eth0" Dec 16 03:27:02.303769 containerd[1591]: 
2025-12-16 03:27:02.277 [INFO][4487] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" Namespace="kube-system" Pod="coredns-674b8bbfcf-md7c7" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--md7c7-eth0" Dec 16 03:27:02.303769 containerd[1591]: 2025-12-16 03:27:02.278 [INFO][4487] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" Namespace="kube-system" Pod="coredns-674b8bbfcf-md7c7" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--md7c7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--md7c7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2ffe169c-1680-458a-8a3d-3c85e36cea72", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 26, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", ContainerID:"0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07", Pod:"coredns-674b8bbfcf-md7c7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.56.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali348b61d0188", MAC:"ea:42:ad:5a:3b:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:27:02.303769 containerd[1591]: 2025-12-16 03:27:02.300 [INFO][4487] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" Namespace="kube-system" Pod="coredns-674b8bbfcf-md7c7" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--md7c7-eth0" Dec 16 03:27:02.341214 containerd[1591]: time="2025-12-16T03:27:02.339193628Z" level=info msg="connecting to shim 0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07" address="unix:///run/containerd/s/b4d88a33607b59cf9aeb76547cd2d34b375f98ff19c2dc2d5f8efad37ced6d6b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:27:02.388270 kubelet[2854]: E1216 03:27:02.387978 2854 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2xvkp" podUID="964e7c44-1e18-4e5b-8b6a-1130081b8647" Dec 16 03:27:02.388896 kubelet[2854]: E1216 03:27:02.388818 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c6dbfbf4-h7k9m" podUID="f116a091-4f95-4334-821a-705964657507" Dec 16 03:27:02.438555 systemd[1]: Started cri-containerd-0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07.scope - libcontainer container 0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07. Dec 16 03:27:02.465000 audit[4550]: NETFILTER_CFG table=filter:134 family=2 entries=44 op=nft_register_chain pid=4550 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:27:02.465000 audit[4550]: SYSCALL arch=c000003e syscall=46 success=yes exit=21532 a0=3 a1=7ffc73a523e0 a2=0 a3=7ffc73a523cc items=0 ppid=3998 pid=4550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.465000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:27:02.491000 audit: BPF prog-id=234 op=LOAD Dec 16 03:27:02.492000 audit: BPF prog-id=235 op=LOAD Dec 16 03:27:02.492000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4521 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306330396466396264393261336532636331616262616435396638 Dec 16 03:27:02.492000 audit: BPF prog-id=235 op=UNLOAD Dec 16 03:27:02.492000 audit[4533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4521 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306330396466396264393261336532636331616262616435396638 Dec 16 03:27:02.492000 audit: BPF prog-id=236 op=LOAD Dec 16 03:27:02.492000 audit[4533]: SYSCALL arch=c000003e syscall=321 
success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4521 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306330396466396264393261336532636331616262616435396638 Dec 16 03:27:02.492000 audit: BPF prog-id=237 op=LOAD Dec 16 03:27:02.492000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4521 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306330396466396264393261336532636331616262616435396638 Dec 16 03:27:02.492000 audit: BPF prog-id=237 op=UNLOAD Dec 16 03:27:02.492000 audit[4533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4521 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306330396466396264393261336532636331616262616435396638 Dec 16 03:27:02.494000 audit: BPF prog-id=236 op=UNLOAD Dec 16 03:27:02.494000 audit[4533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4521 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306330396466396264393261336532636331616262616435396638 Dec 16 03:27:02.494000 audit: BPF prog-id=238 op=LOAD Dec 16 03:27:02.494000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4521 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066306330396466396264393261336532636331616262616435396638 Dec 16 03:27:02.582968 containerd[1591]: time="2025-12-16T03:27:02.582880670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-md7c7,Uid:2ffe169c-1680-458a-8a3d-3c85e36cea72,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07\"" Dec 16 03:27:02.592460 containerd[1591]: time="2025-12-16T03:27:02.592349420Z" level=info msg="CreateContainer within sandbox \"0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:27:02.608945 containerd[1591]: time="2025-12-16T03:27:02.608669821Z" level=info msg="Container 3551d1e2bccf78b7bb31d6b2b7777cfa5c6ec48d0b0affaf605ab4f4ab6a9962: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:27:02.618683 containerd[1591]: time="2025-12-16T03:27:02.618639838Z" level=info msg="CreateContainer within sandbox \"0f0c09df9bd92a3e2cc1abbad59f834c94c69702400851719a27465978bced07\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3551d1e2bccf78b7bb31d6b2b7777cfa5c6ec48d0b0affaf605ab4f4ab6a9962\"" Dec 16 03:27:02.619778 containerd[1591]: time="2025-12-16T03:27:02.619741934Z" level=info msg="StartContainer for \"3551d1e2bccf78b7bb31d6b2b7777cfa5c6ec48d0b0affaf605ab4f4ab6a9962\"" Dec 16 03:27:02.622982 containerd[1591]: time="2025-12-16T03:27:02.622219243Z" level=info msg="connecting to shim 3551d1e2bccf78b7bb31d6b2b7777cfa5c6ec48d0b0affaf605ab4f4ab6a9962" address="unix:///run/containerd/s/b4d88a33607b59cf9aeb76547cd2d34b375f98ff19c2dc2d5f8efad37ced6d6b" protocol=ttrpc version=3 Dec 16 03:27:02.670753 systemd[1]: Started cri-containerd-3551d1e2bccf78b7bb31d6b2b7777cfa5c6ec48d0b0affaf605ab4f4ab6a9962.scope - libcontainer container 3551d1e2bccf78b7bb31d6b2b7777cfa5c6ec48d0b0affaf605ab4f4ab6a9962. Dec 16 03:27:02.735000 audit: BPF prog-id=239 op=LOAD Dec 16 03:27:02.735000 audit: BPF prog-id=240 op=LOAD Dec 16 03:27:02.735000 audit[4564]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4521 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335353164316532626363663738623762623331643662326237373737 Dec 16 03:27:02.735000 audit: BPF prog-id=240 op=UNLOAD Dec 16 03:27:02.735000 audit[4564]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4521 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335353164316532626363663738623762623331643662326237373737 Dec 16 03:27:02.736000 audit: BPF prog-id=241 op=LOAD Dec 16 03:27:02.736000 audit[4564]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4521 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.736000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335353164316532626363663738623762623331643662326237373737 Dec 16 03:27:02.736000 audit: BPF prog-id=242 op=LOAD Dec 16 03:27:02.736000 audit[4564]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4521 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335353164316532626363663738623762623331643662326237373737 Dec 16 03:27:02.736000 audit: BPF prog-id=242 op=UNLOAD Dec 16 03:27:02.736000 audit[4564]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4521 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335353164316532626363663738623762623331643662326237373737 Dec 16 03:27:02.736000 audit: BPF prog-id=241 op=UNLOAD Dec 16 03:27:02.736000 audit[4564]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4521 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335353164316532626363663738623762623331643662326237373737 Dec 16 03:27:02.736000 audit: BPF prog-id=243 op=LOAD Dec 16 03:27:02.736000 audit[4564]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4521 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.736000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335353164316532626363663738623762623331643662326237373737 Dec 16 03:27:02.789999 containerd[1591]: time="2025-12-16T03:27:02.789954317Z" level=info msg="StartContainer for \"3551d1e2bccf78b7bb31d6b2b7777cfa5c6ec48d0b0affaf605ab4f4ab6a9962\" returns successfully" Dec 16 03:27:02.884000 audit[4599]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=4599 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:27:02.884000 audit[4599]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe56ee1260 a2=0 a3=7ffe56ee124c items=0 ppid=2998 pid=4599 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.884000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:27:02.891000 audit[4599]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=4599 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:27:02.891000 audit[4599]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe56ee1260 a2=0 a3=7ffe56ee124c items=0 ppid=2998 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:02.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:27:03.076055 containerd[1591]: time="2025-12-16T03:27:03.075996652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-984d7f7b9-qw7fg,Uid:58c0a3f8-05f5-4d50-84f5-6c400d162736,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:27:03.076657 containerd[1591]: time="2025-12-16T03:27:03.076601991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-984d7f7b9-k94jh,Uid:574af3c5-d781-4fa8-842f-04bccc0c5fcf,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:27:03.077204 containerd[1591]: time="2025-12-16T03:27:03.077086116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mjkcx,Uid:d069b10a-0bb5-4869-a283-bc34fbcea4f8,Namespace:calico-system,Attempt:0,}" Dec 16 03:27:03.281126 systemd-networkd[1501]: cali5694dce1445: Gained IPv6LL Dec 16 03:27:03.402260 kubelet[2854]: E1216 03:27:03.402206 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2xvkp" podUID="964e7c44-1e18-4e5b-8b6a-1130081b8647" Dec 16 03:27:03.456641 systemd-networkd[1501]: calie82848969cf: Link UP Dec 16 03:27:03.458400 systemd-networkd[1501]: calie82848969cf: Gained carrier Dec 16 03:27:03.481169 kubelet[2854]: I1216 03:27:03.480393 2854 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-md7c7" podStartSLOduration=45.480362026 podStartE2EDuration="45.480362026s" podCreationTimestamp="2025-12-16 03:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:27:03.453606233 +0000 UTC m=+49.643807093" watchObservedRunningTime="2025-12-16 03:27:03.480362026 +0000 UTC m=+49.670562898" Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.211 [INFO][4618] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--qw7fg-eth0 calico-apiserver-984d7f7b9- calico-apiserver 58c0a3f8-05f5-4d50-84f5-6c400d162736 822 0 2025-12-16 03:26:28 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:984d7f7b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal calico-apiserver-984d7f7b9-qw7fg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie82848969cf [] [] }} ContainerID="fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" Namespace="calico-apiserver" Pod="calico-apiserver-984d7f7b9-qw7fg" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--qw7fg-" Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.212 [INFO][4618] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" Namespace="calico-apiserver" Pod="calico-apiserver-984d7f7b9-qw7fg" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--qw7fg-eth0" Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.326 [INFO][4641] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" HandleID="k8s-pod-network.fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--qw7fg-eth0" Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.328 [INFO][4641] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" HandleID="k8s-pod-network.fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--qw7fg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf800), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", "pod":"calico-apiserver-984d7f7b9-qw7fg", "timestamp":"2025-12-16 03:27:03.326678802 +0000 UTC"}, Hostname:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.328 [INFO][4641] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.328 [INFO][4641] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.328 [INFO][4641] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal' Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.356 [INFO][4641] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.369 [INFO][4641] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.386 [INFO][4641] ipam/ipam.go 511: Trying affinity for 192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.399 [INFO][4641] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.405 [INFO][4641] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.405 [INFO][4641] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.128/26 handle="k8s-pod-network.fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.408 [INFO][4641] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.426 [INFO][4641] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.128/26 handle="k8s-pod-network.fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.441 [INFO][4641] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.134/26] block=192.168.56.128/26 handle="k8s-pod-network.fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.442 [INFO][4641] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.134/26] handle="k8s-pod-network.fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.443 [INFO][4641] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:27:03.492350 containerd[1591]: 2025-12-16 03:27:03.443 [INFO][4641] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.134/26] IPv6=[] ContainerID="fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" HandleID="k8s-pod-network.fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--qw7fg-eth0" Dec 16 03:27:03.498370 containerd[1591]: 2025-12-16 03:27:03.449 [INFO][4618] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" Namespace="calico-apiserver" Pod="calico-apiserver-984d7f7b9-qw7fg" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--qw7fg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--qw7fg-eth0", GenerateName:"calico-apiserver-984d7f7b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"58c0a3f8-05f5-4d50-84f5-6c400d162736", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 26, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"984d7f7b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-984d7f7b9-qw7fg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie82848969cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:27:03.498370 containerd[1591]: 2025-12-16 03:27:03.449 [INFO][4618] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.134/32] ContainerID="fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" Namespace="calico-apiserver" Pod="calico-apiserver-984d7f7b9-qw7fg" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--qw7fg-eth0" Dec 16 03:27:03.498370 containerd[1591]: 2025-12-16 03:27:03.449 [INFO][4618] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie82848969cf ContainerID="fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" Namespace="calico-apiserver" Pod="calico-apiserver-984d7f7b9-qw7fg" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--qw7fg-eth0" Dec 16 03:27:03.498370 containerd[1591]: 2025-12-16 03:27:03.460 [INFO][4618] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" Namespace="calico-apiserver" 
Pod="calico-apiserver-984d7f7b9-qw7fg" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--qw7fg-eth0" Dec 16 03:27:03.498370 containerd[1591]: 2025-12-16 03:27:03.462 [INFO][4618] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" Namespace="calico-apiserver" Pod="calico-apiserver-984d7f7b9-qw7fg" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--qw7fg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--qw7fg-eth0", GenerateName:"calico-apiserver-984d7f7b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"58c0a3f8-05f5-4d50-84f5-6c400d162736", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 26, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"984d7f7b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", ContainerID:"fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a", Pod:"calico-apiserver-984d7f7b9-qw7fg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie82848969cf", MAC:"6e:db:e4:ef:33:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:27:03.498370 containerd[1591]: 2025-12-16 03:27:03.479 [INFO][4618] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" Namespace="calico-apiserver" Pod="calico-apiserver-984d7f7b9-qw7fg" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--qw7fg-eth0" Dec 16 03:27:03.537163 systemd-networkd[1501]: cali348b61d0188: Gained IPv6LL Dec 16 03:27:03.568418 containerd[1591]: time="2025-12-16T03:27:03.568347655Z" level=info msg="connecting to shim fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a" address="unix:///run/containerd/s/3402a9400a3e39c38369b6ee2ba9e9798b95cb0ec1f32dbc5078275c8bc28b66" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:27:03.597070 systemd-networkd[1501]: caliede019a17be: Link UP Dec 16 03:27:03.598475 systemd-networkd[1501]: caliede019a17be: Gained carrier Dec 16 03:27:03.614000 audit[4688]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=4688 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:27:03.614000 audit[4688]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffde15f3b50 a2=0 a3=7ffde15f3b3c 
items=0 ppid=2998 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:03.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:27:03.635000 audit[4688]: NETFILTER_CFG table=nat:138 family=2 entries=44 op=nft_register_rule pid=4688 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:27:03.635000 audit[4688]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffde15f3b50 a2=0 a3=7ffde15f3b3c items=0 ppid=2998 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:03.635000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:27:03.648000 audit[4689]: NETFILTER_CFG table=filter:139 family=2 entries=66 op=nft_register_chain pid=4689 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:27:03.648000 audit[4689]: SYSCALL arch=c000003e syscall=46 success=yes exit=32960 a0=3 a1=7ffcdcac3a50 a2=0 a3=7ffcdcac3a3c items=0 ppid=3998 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:03.648000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.279 [INFO][4604] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--k94jh-eth0 calico-apiserver-984d7f7b9- calico-apiserver 574af3c5-d781-4fa8-842f-04bccc0c5fcf 826 0 2025-12-16 03:26:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:984d7f7b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal calico-apiserver-984d7f7b9-k94jh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliede019a17be [] [] }} ContainerID="31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" Namespace="calico-apiserver" Pod="calico-apiserver-984d7f7b9-k94jh" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--k94jh-" Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.279 [INFO][4604] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" Namespace="calico-apiserver" Pod="calico-apiserver-984d7f7b9-k94jh" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--k94jh-eth0" Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.403 [INFO][4652] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" HandleID="k8s-pod-network.31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--k94jh-eth0" Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.403 [INFO][4652] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" HandleID="k8s-pod-network.31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--k94jh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", "pod":"calico-apiserver-984d7f7b9-k94jh", "timestamp":"2025-12-16 03:27:03.403118728 +0000 UTC"}, Hostname:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.403 [INFO][4652] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.443 [INFO][4652] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.443 [INFO][4652] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal' Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.468 [INFO][4652] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.486 [INFO][4652] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.497 [INFO][4652] ipam/ipam.go 511: Trying affinity for 192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.513 [INFO][4652] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.519 [INFO][4652] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.520 [INFO][4652] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.128/26 handle="k8s-pod-network.31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.526 [INFO][4652] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996 Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.536 [INFO][4652] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.56.128/26 handle="k8s-pod-network.31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.573 [INFO][4652] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.135/26] block=192.168.56.128/26 handle="k8s-pod-network.31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.575 [INFO][4652] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.135/26] handle="k8s-pod-network.31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.575 [INFO][4652] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:27:03.676994 containerd[1591]: 2025-12-16 03:27:03.575 [INFO][4652] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.135/26] IPv6=[] ContainerID="31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" HandleID="k8s-pod-network.31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--k94jh-eth0" Dec 16 03:27:03.678143 containerd[1591]: 2025-12-16 03:27:03.584 [INFO][4604] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" Namespace="calico-apiserver" Pod="calico-apiserver-984d7f7b9-k94jh" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--k94jh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--k94jh-eth0", GenerateName:"calico-apiserver-984d7f7b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"574af3c5-d781-4fa8-842f-04bccc0c5fcf", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 26, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"984d7f7b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-984d7f7b9-k94jh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliede019a17be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:27:03.678143 containerd[1591]: 2025-12-16 03:27:03.584 [INFO][4604] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.135/32] 
ContainerID="31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" Namespace="calico-apiserver" Pod="calico-apiserver-984d7f7b9-k94jh" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--k94jh-eth0" Dec 16 03:27:03.678143 containerd[1591]: 2025-12-16 03:27:03.584 [INFO][4604] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliede019a17be ContainerID="31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" Namespace="calico-apiserver" Pod="calico-apiserver-984d7f7b9-k94jh" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--k94jh-eth0" Dec 16 03:27:03.678143 containerd[1591]: 2025-12-16 03:27:03.599 [INFO][4604] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" Namespace="calico-apiserver" Pod="calico-apiserver-984d7f7b9-k94jh" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--k94jh-eth0" Dec 16 03:27:03.678143 containerd[1591]: 2025-12-16 03:27:03.600 [INFO][4604] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" Namespace="calico-apiserver" Pod="calico-apiserver-984d7f7b9-k94jh" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--k94jh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--k94jh-eth0", GenerateName:"calico-apiserver-984d7f7b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"574af3c5-d781-4fa8-842f-04bccc0c5fcf", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 26, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"984d7f7b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", ContainerID:"31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996", Pod:"calico-apiserver-984d7f7b9-k94jh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.56.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliede019a17be", MAC:"66:72:3f:4a:e9:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:27:03.678143 containerd[1591]: 2025-12-16 03:27:03.640 [INFO][4604] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" Namespace="calico-apiserver" Pod="calico-apiserver-984d7f7b9-k94jh" 
WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-calico--apiserver--984d7f7b9--k94jh-eth0" Dec 16 03:27:03.753204 systemd[1]: Started cri-containerd-fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a.scope - libcontainer container fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a. Dec 16 03:27:03.766171 containerd[1591]: time="2025-12-16T03:27:03.766092860Z" level=info msg="connecting to shim 31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996" address="unix:///run/containerd/s/2005f95958686956070d25d293866cafb77628882f319d306755135dd9ea4cb9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:27:03.796133 systemd-networkd[1501]: calie89f92e7c52: Link UP Dec 16 03:27:03.798593 systemd-networkd[1501]: calie89f92e7c52: Gained carrier Dec 16 03:27:03.830000 audit: BPF prog-id=244 op=LOAD Dec 16 03:27:03.831000 audit: BPF prog-id=245 op=LOAD Dec 16 03:27:03.831000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4681 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:03.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661376636626430333936353266363764316131386566623530333333 Dec 16 03:27:03.831000 audit: BPF prog-id=245 op=UNLOAD Dec 16 03:27:03.831000 audit[4698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4681 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:03.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661376636626430333936353266363764316131386566623530333333 Dec 16 03:27:03.831000 audit: BPF prog-id=246 op=LOAD Dec 16 03:27:03.831000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4681 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:03.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661376636626430333936353266363764316131386566623530333333 Dec 16 03:27:03.831000 audit: BPF prog-id=247 op=LOAD Dec 16 03:27:03.831000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4681 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:03.831000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661376636626430333936353266363764316131386566623530333333 Dec 16 03:27:03.831000 audit: BPF prog-id=247 op=UNLOAD Dec 16 03:27:03.831000 audit[4698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4681 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:03.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661376636626430333936353266363764316131386566623530333333 Dec 16 03:27:03.831000 audit: BPF prog-id=246 op=UNLOAD Dec 16 03:27:03.831000 audit[4698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4681 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:03.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661376636626430333936353266363764316131386566623530333333 Dec 16 03:27:03.831000 audit: BPF prog-id=248 op=LOAD Dec 16 03:27:03.831000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4681 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:03.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661376636626430333936353266363764316131386566623530333333 Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.268 [INFO][4621] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-csi--node--driver--mjkcx-eth0 csi-node-driver- calico-system d069b10a-0bb5-4869-a283-bc34fbcea4f8 712 0 2025-12-16 03:26:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal csi-node-driver-mjkcx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie89f92e7c52 [] [] }} ContainerID="c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" Namespace="calico-system" Pod="csi-node-driver-mjkcx" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-csi--node--driver--mjkcx-" Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.272 [INFO][4621] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" Namespace="calico-system" Pod="csi-node-driver-mjkcx" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-csi--node--driver--mjkcx-eth0" Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.422 [INFO][4650] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" HandleID="k8s-pod-network.c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-csi--node--driver--mjkcx-eth0" Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.422 [INFO][4650] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" HandleID="k8s-pod-network.c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-csi--node--driver--mjkcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033d310), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", "pod":"csi-node-driver-mjkcx", "timestamp":"2025-12-16 03:27:03.422068673 +0000 UTC"}, Hostname:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.422 [INFO][4650] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.575 [INFO][4650] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.575 [INFO][4650] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal' Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.628 [INFO][4650] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.666 [INFO][4650] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.691 [INFO][4650] ipam/ipam.go 511: Trying affinity for 192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.696 [INFO][4650] ipam/ipam.go 158: Attempting to load block cidr=192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.703 [INFO][4650] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.56.128/26 host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.703 [INFO][4650] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.56.128/26 handle="k8s-pod-network.c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.712 [INFO][4650] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68 Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.740 [INFO][4650] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.56.128/26 handle="k8s-pod-network.c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.764 [INFO][4650] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.56.136/26] block=192.168.56.128/26 handle="k8s-pod-network.c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.765 [INFO][4650] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.56.136/26] handle="k8s-pod-network.c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" host="ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal" Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.765 [INFO][4650] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
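Aside (not part of the log): the claim above hands out 192.168.56.136 from the /26 block 192.168.56.128/26 that is affine to this node; the earlier k94jh endpoint received .135 from the same block. A /26 spans .128 through .191, i.e. 64 addresses per block. A stdlib-only check that both addresses sit inside that block:

```go
// Quick stdlib check, using only addresses taken from the log, that the claimed
// IPs fall inside the node-affine block 192.168.56.128/26.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, err := net.ParseCIDR("192.168.56.128/26") // covers .128 through .191
	if err != nil {
		panic(err)
	}
	for _, ip := range []string{"192.168.56.135", "192.168.56.136"} {
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(net.ParseIP(ip)))
	}
}
```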
Dec 16 03:27:03.856381 containerd[1591]: 2025-12-16 03:27:03.768 [INFO][4650] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.56.136/26] IPv6=[] ContainerID="c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" HandleID="k8s-pod-network.c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" Workload="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-csi--node--driver--mjkcx-eth0" Dec 16 03:27:03.859313 containerd[1591]: 2025-12-16 03:27:03.775 [INFO][4621] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" Namespace="calico-system" Pod="csi-node-driver-mjkcx" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-csi--node--driver--mjkcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-csi--node--driver--mjkcx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d069b10a-0bb5-4869-a283-bc34fbcea4f8", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 26, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-mjkcx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie89f92e7c52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:27:03.859313 containerd[1591]: 2025-12-16 03:27:03.776 [INFO][4621] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.56.136/32] ContainerID="c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" Namespace="calico-system" Pod="csi-node-driver-mjkcx" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-csi--node--driver--mjkcx-eth0" Dec 16 03:27:03.859313 containerd[1591]: 2025-12-16 03:27:03.779 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie89f92e7c52 ContainerID="c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" Namespace="calico-system" Pod="csi-node-driver-mjkcx" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-csi--node--driver--mjkcx-eth0" Dec 16 03:27:03.859313 containerd[1591]: 2025-12-16 03:27:03.805 [INFO][4621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" Namespace="calico-system" Pod="csi-node-driver-mjkcx" 
WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-csi--node--driver--mjkcx-eth0" Dec 16 03:27:03.859313 containerd[1591]: 2025-12-16 03:27:03.807 [INFO][4621] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" Namespace="calico-system" Pod="csi-node-driver-mjkcx" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-csi--node--driver--mjkcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-csi--node--driver--mjkcx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d069b10a-0bb5-4869-a283-bc34fbcea4f8", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 26, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-a19d081ede232f2cf395.c.flatcar-212911.internal", ContainerID:"c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68", Pod:"csi-node-driver-mjkcx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.56.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie89f92e7c52", MAC:"5a:8d:8b:8e:93:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:27:03.859313 containerd[1591]: 2025-12-16 03:27:03.841 [INFO][4621] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" Namespace="calico-system" Pod="csi-node-driver-mjkcx" WorkloadEndpoint="ci--4547--0--0--a19d081ede232f2cf395.c.flatcar--212911.internal-k8s-csi--node--driver--mjkcx-eth0" Dec 16 03:27:03.875521 systemd[1]: Started cri-containerd-31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996.scope - libcontainer container 31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996. Dec 16 03:27:03.921937 containerd[1591]: time="2025-12-16T03:27:03.920153493Z" level=info msg="connecting to shim c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68" address="unix:///run/containerd/s/cb85ae000687f3d10206ca4ccb5e04c4431307e320263ca3b4e95647e35c5f87" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:27:03.976252 systemd[1]: Started cri-containerd-c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68.scope - libcontainer container c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68. 
Dec 16 03:27:04.013000 audit: BPF prog-id=249 op=LOAD Dec 16 03:27:04.018000 audit: BPF prog-id=250 op=LOAD Dec 16 03:27:04.018000 audit[4739]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4722 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331663561383739323766303463373930653832306239653139323435 Dec 16 03:27:04.018000 audit: BPF prog-id=250 op=UNLOAD Dec 16 03:27:04.018000 audit[4739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4722 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331663561383739323766303463373930653832306239653139323435 Dec 16 03:27:04.019000 audit: BPF prog-id=251 op=LOAD Dec 16 03:27:04.019000 audit[4739]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4722 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331663561383739323766303463373930653832306239653139323435 Dec 16 03:27:04.019000 audit: BPF prog-id=252 op=LOAD Dec 16 03:27:04.019000 audit[4739]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4722 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331663561383739323766303463373930653832306239653139323435 Dec 16 03:27:04.019000 audit: BPF prog-id=252 op=UNLOAD Dec 16 03:27:04.019000 audit[4739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4722 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331663561383739323766303463373930653832306239653139323435 Dec 16 03:27:04.019000 audit: BPF prog-id=251 op=UNLOAD Dec 16 03:27:04.019000 audit[4739]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4722 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331663561383739323766303463373930653832306239653139323435 Dec 16 03:27:04.020000 audit: BPF prog-id=253 op=LOAD Dec 16 03:27:04.020000 audit[4739]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4722 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331663561383739323766303463373930653832306239653139323435 Dec 16 03:27:04.023000 audit[4799]: NETFILTER_CFG table=filter:140 family=2 entries=63 op=nft_register_chain pid=4799 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:27:04.023000 audit[4799]: SYSCALL arch=c000003e syscall=46 success=yes exit=30680 a0=3 a1=7ffcbe3f6400 a2=0 a3=7ffcbe3f63ec items=0 ppid=3998 pid=4799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.023000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:27:04.045737 containerd[1591]: time="2025-12-16T03:27:04.045580932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-984d7f7b9-qw7fg,Uid:58c0a3f8-05f5-4d50-84f5-6c400d162736,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fa7f6bd039652f67d1a18efb5033333c3271c083eb16b18243398e5d9729706a\"" Dec 16 03:27:04.050643 containerd[1591]: time="2025-12-16T03:27:04.050518733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:27:04.120000 audit: BPF prog-id=254 op=LOAD Dec 16 03:27:04.121000 audit: BPF prog-id=255 op=LOAD Dec 16 03:27:04.121000 audit[4788]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4775 pid=4788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334383832653465656132663232396639663432323266346362643532 Dec 16 03:27:04.121000 audit: BPF prog-id=255 op=UNLOAD Dec 16 03:27:04.121000 audit[4788]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4775 pid=4788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334383832653465656132663232396639663432323266346362643532 Dec 16 03:27:04.121000 audit: BPF prog-id=256 op=LOAD Dec 16 03:27:04.121000 audit[4788]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4775 pid=4788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334383832653465656132663232396639663432323266346362643532 Dec 16 03:27:04.122000 audit: BPF prog-id=257 op=LOAD Dec 16 03:27:04.122000 audit[4788]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4775 pid=4788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334383832653465656132663232396639663432323266346362643532 Dec 16 03:27:04.122000 audit: BPF prog-id=257 op=UNLOAD Dec 16 03:27:04.122000 audit[4788]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4775 pid=4788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334383832653465656132663232396639663432323266346362643532 Dec 16 03:27:04.122000 audit: BPF prog-id=256 op=UNLOAD Dec 16 03:27:04.122000 audit[4788]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4775 pid=4788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334383832653465656132663232396639663432323266346362643532 Dec 16 03:27:04.122000 audit: BPF prog-id=258 op=LOAD Dec 16 03:27:04.122000 audit[4788]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4775 pid=4788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.122000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334383832653465656132663232396639663432323266346362643532 Dec 16 03:27:04.161906 containerd[1591]: time="2025-12-16T03:27:04.161851743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mjkcx,Uid:d069b10a-0bb5-4869-a283-bc34fbcea4f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4882e4eea2f229f9f4222f4cbd52f422624dbfc02c4eb5a6f7b1786d2abfc68\"" Dec 16 03:27:04.174000 audit[4817]: NETFILTER_CFG table=filter:141 family=2 entries=40 op=nft_register_chain pid=4817 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:27:04.174000 audit[4817]: SYSCALL arch=c000003e syscall=46 success=yes exit=20784 a0=3 a1=7ffd1eda7550 a2=0 a3=7ffd1eda753c items=0 ppid=3998 pid=4817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.174000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:27:04.210410 containerd[1591]: time="2025-12-16T03:27:04.210125941Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:04.212181 containerd[1591]: time="2025-12-16T03:27:04.211594320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:27:04.212181 containerd[1591]: time="2025-12-16T03:27:04.211646617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:04.212981 kubelet[2854]: E1216 03:27:04.212936 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:27:04.214012 kubelet[2854]: E1216 03:27:04.213944 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:27:04.214261 kubelet[2854]: E1216 03:27:04.214192 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7t6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-984d7f7b9-qw7fg_calico-apiserver(58c0a3f8-05f5-4d50-84f5-6c400d162736): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:04.214900 containerd[1591]: time="2025-12-16T03:27:04.214768458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:27:04.216367 kubelet[2854]: E1216 03:27:04.216220 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-qw7fg" podUID="58c0a3f8-05f5-4d50-84f5-6c400d162736" Dec 16 03:27:04.266529 containerd[1591]: time="2025-12-16T03:27:04.266417131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-984d7f7b9-k94jh,Uid:574af3c5-d781-4fa8-842f-04bccc0c5fcf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"31f5a87927f04c790e820b9e19245ba75c1b2b055223588ae87c77a559116996\"" Dec 16 03:27:04.381932 containerd[1591]: time="2025-12-16T03:27:04.381729480Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:04.384002 containerd[1591]: time="2025-12-16T03:27:04.383955486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:27:04.384276 containerd[1591]: time="2025-12-16T03:27:04.384251162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:04.384778 kubelet[2854]: E1216 03:27:04.384551 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:27:04.384778 kubelet[2854]: E1216 03:27:04.384732 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:27:04.385214 kubelet[2854]: E1216 03:27:04.385009 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrxhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mjkcx_calico-system(d069b10a-0bb5-4869-a283-bc34fbcea4f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:04.386328 containerd[1591]: time="2025-12-16T03:27:04.386204802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:27:04.410839 kubelet[2854]: E1216 03:27:04.410713 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-qw7fg" podUID="58c0a3f8-05f5-4d50-84f5-6c400d162736" Dec 16 03:27:04.550090 containerd[1591]: time="2025-12-16T03:27:04.549977933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:04.551603 containerd[1591]: time="2025-12-16T03:27:04.551373409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:27:04.551603 containerd[1591]: time="2025-12-16T03:27:04.551477108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:04.551719 kubelet[2854]: E1216 03:27:04.551670 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:27:04.551820 kubelet[2854]: E1216 03:27:04.551743 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:27:04.553059 containerd[1591]: time="2025-12-16T03:27:04.552995963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:27:04.553182 kubelet[2854]: E1216 03:27:04.553074 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzp8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-984d7f7b9-k94jh_calico-apiserver(574af3c5-d781-4fa8-842f-04bccc0c5fcf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:04.554337 kubelet[2854]: E1216 03:27:04.554284 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-k94jh" podUID="574af3c5-d781-4fa8-842f-04bccc0c5fcf" Dec 16 03:27:04.689158 systemd-networkd[1501]: calie82848969cf: Gained IPv6LL Dec 16 03:27:04.729458 containerd[1591]: time="2025-12-16T03:27:04.729381869Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:04.731283 containerd[1591]: time="2025-12-16T03:27:04.730897905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:27:04.731283 containerd[1591]: time="2025-12-16T03:27:04.730968212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:04.731592 kubelet[2854]: E1216 03:27:04.731511 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:27:04.731733 kubelet[2854]: E1216 03:27:04.731655 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:27:04.732316 kubelet[2854]: E1216 03:27:04.732135 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrxhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mjkcx_calico-system(d069b10a-0bb5-4869-a283-bc34fbcea4f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:04.733569 kubelet[2854]: E1216 03:27:04.733517 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mjkcx" podUID="d069b10a-0bb5-4869-a283-bc34fbcea4f8" Dec 16 03:27:04.769000 audit[4835]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=4835 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:27:04.769000 audit[4835]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff77d07440 a2=0 a3=7fff77d0742c items=0 ppid=2998 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.769000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 
03:27:04.788000 audit[4835]: NETFILTER_CFG table=nat:143 family=2 entries=56 op=nft_register_chain pid=4835 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:27:04.788000 audit[4835]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff77d07440 a2=0 a3=7fff77d0742c items=0 ppid=2998 pid=4835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:04.788000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:27:05.073373 systemd-networkd[1501]: caliede019a17be: Gained IPv6LL Dec 16 03:27:05.425608 kubelet[2854]: E1216 03:27:05.425557 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-k94jh" podUID="574af3c5-d781-4fa8-842f-04bccc0c5fcf" Dec 16 03:27:05.426480 kubelet[2854]: E1216 03:27:05.426201 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-qw7fg" podUID="58c0a3f8-05f5-4d50-84f5-6c400d162736" Dec 16 03:27:05.427936 kubelet[2854]: E1216 03:27:05.427374 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mjkcx" podUID="d069b10a-0bb5-4869-a283-bc34fbcea4f8" Dec 16 03:27:05.708000 audit[4838]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=4838 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:27:05.708000 audit[4838]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc8fec49a0 a2=0 a3=7ffc8fec498c items=0 ppid=2998 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:05.708000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:27:05.713000 audit[4838]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=4838 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:27:05.713000 audit[4838]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc8fec49a0 a2=0 a3=7ffc8fec498c items=0 ppid=2998 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:05.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:27:05.778150 systemd-networkd[1501]: calie89f92e7c52: Gained IPv6LL Dec 16 03:27:08.337145 ntpd[1553]: Listen normally on 7 vxlan.calico 192.168.56.128:123 Dec 16 03:27:08.337247 ntpd[1553]: Listen normally on 8 cali8f942d69732 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 03:27:08.337753 ntpd[1553]: 16 Dec 03:27:08 ntpd[1553]: Listen normally on 7 vxlan.calico 192.168.56.128:123 Dec 16 03:27:08.337753 ntpd[1553]: 16 Dec 03:27:08 ntpd[1553]: Listen normally on 8 cali8f942d69732 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 03:27:08.337753 ntpd[1553]: 16 Dec 03:27:08 ntpd[1553]: Listen normally on 9 vxlan.calico [fe80::64d1:7cff:fe38:4a71%5]:123 Dec 16 03:27:08.337753 ntpd[1553]: 16 Dec 03:27:08 ntpd[1553]: Listen normally on 10 cali70f7a5a9b1e [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 03:27:08.337753 ntpd[1553]: 16 Dec 03:27:08 ntpd[1553]: Listen normally on 11 cali1a79d147d88 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 03:27:08.337753 ntpd[1553]: 16 Dec 03:27:08 ntpd[1553]: Listen normally on 12 cali5694dce1445 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 03:27:08.337753 ntpd[1553]: 16 Dec 03:27:08 ntpd[1553]: Listen normally on 13 cali348b61d0188 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 03:27:08.337753 ntpd[1553]: 16 Dec 03:27:08 ntpd[1553]: Listen normally on 14 calie82848969cf [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 03:27:08.337753 ntpd[1553]: 16 Dec 03:27:08 ntpd[1553]: Listen normally on 15 caliede019a17be [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 03:27:08.337753 ntpd[1553]: 16 Dec 03:27:08 ntpd[1553]: Listen normally on 16 calie89f92e7c52 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 03:27:08.337293 ntpd[1553]: Listen normally on 9 vxlan.calico [fe80::64d1:7cff:fe38:4a71%5]:123 Dec 16 03:27:08.337335 ntpd[1553]: Listen normally on 10 cali70f7a5a9b1e [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 03:27:08.337395 ntpd[1553]: Listen normally on 11 cali1a79d147d88 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 03:27:08.337438 ntpd[1553]: Listen normally on 12 cali5694dce1445 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 03:27:08.337476 ntpd[1553]: Listen normally on 13 cali348b61d0188 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 03:27:08.337517 ntpd[1553]: Listen normally on 14 calie82848969cf [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 03:27:08.337558 ntpd[1553]: Listen normally on 15 caliede019a17be [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 03:27:08.337595 ntpd[1553]: Listen normally on 16 calie89f92e7c52 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 03:27:12.085868 containerd[1591]: time="2025-12-16T03:27:12.085799704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:27:12.251040 containerd[1591]: time="2025-12-16T03:27:12.250959417Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Dec 16 03:27:12.254069 containerd[1591]: time="2025-12-16T03:27:12.254012327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:27:12.254189 containerd[1591]: time="2025-12-16T03:27:12.254140022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:12.254474 kubelet[2854]: E1216 03:27:12.254416 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:27:12.255038 kubelet[2854]: E1216 03:27:12.254487 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:27:12.255038 kubelet[2854]: E1216 03:27:12.254657 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f5fe397c66444a048cd1698a856f56dd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-86l85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767dd84f9b-gcdgn_calico-system(9487d5f8-7cdd-4f1e-a411-61521d9e1c14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:12.257688 containerd[1591]: time="2025-12-16T03:27:12.257651047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:27:12.411583 containerd[1591]: time="2025-12-16T03:27:12.411518337Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:12.413405 containerd[1591]: time="2025-12-16T03:27:12.413249488Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:27:12.413405 containerd[1591]: time="2025-12-16T03:27:12.413364528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:12.413902 kubelet[2854]: E1216 03:27:12.413852 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:27:12.414198 kubelet[2854]: E1216 03:27:12.414085 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:27:12.415991 kubelet[2854]: E1216 03:27:12.415890 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86l85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767dd84f9b-gcdgn_calico-system(9487d5f8-7cdd-4f1e-a411-61521d9e1c14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:12.417472 kubelet[2854]: E1216 
03:27:12.417409 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767dd84f9b-gcdgn" podUID="9487d5f8-7cdd-4f1e-a411-61521d9e1c14" Dec 16 03:27:15.078597 containerd[1591]: time="2025-12-16T03:27:15.078161291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:27:15.239503 containerd[1591]: time="2025-12-16T03:27:15.239434156Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:15.242594 containerd[1591]: time="2025-12-16T03:27:15.242537035Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:27:15.242776 containerd[1591]: time="2025-12-16T03:27:15.242651646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:15.242900 kubelet[2854]: E1216 03:27:15.242849 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:27:15.244098 kubelet[2854]: E1216 03:27:15.242959 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:27:15.244098 kubelet[2854]: E1216 03:27:15.243171 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hm9d8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2xvkp_calico-system(964e7c44-1e18-4e5b-8b6a-1130081b8647): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:15.244837 kubelet[2854]: E1216 03:27:15.244766 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2xvkp" podUID="964e7c44-1e18-4e5b-8b6a-1130081b8647" Dec 16 03:27:16.078936 containerd[1591]: time="2025-12-16T03:27:16.078187232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 
03:27:16.230687 containerd[1591]: time="2025-12-16T03:27:16.230582310Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:16.232701 containerd[1591]: time="2025-12-16T03:27:16.232591953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:27:16.233092 containerd[1591]: time="2025-12-16T03:27:16.232807639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:16.233635 kubelet[2854]: E1216 03:27:16.233516 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:27:16.233635 kubelet[2854]: E1216 03:27:16.233595 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:27:16.233941 kubelet[2854]: E1216 03:27:16.233856 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrxhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mjkcx_calico-system(d069b10a-0bb5-4869-a283-bc34fbcea4f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:16.238478 containerd[1591]: time="2025-12-16T03:27:16.238439111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:27:16.397522 containerd[1591]: time="2025-12-16T03:27:16.397453871Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:16.398992 containerd[1591]: time="2025-12-16T03:27:16.398934156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:16.399657 containerd[1591]: time="2025-12-16T03:27:16.398941135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:27:16.399768 kubelet[2854]: E1216 03:27:16.399528 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:27:16.399768 kubelet[2854]: E1216 03:27:16.399615 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:27:16.400887 kubelet[2854]: E1216 03:27:16.399983 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrxhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mjkcx_calico-system(d069b10a-0bb5-4869-a283-bc34fbcea4f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:16.403307 kubelet[2854]: E1216 03:27:16.402997 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mjkcx" podUID="d069b10a-0bb5-4869-a283-bc34fbcea4f8" Dec 16 03:27:17.078516 containerd[1591]: time="2025-12-16T03:27:17.078086616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:27:17.234150 containerd[1591]: time="2025-12-16T03:27:17.234096202Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:17.236522 containerd[1591]: time="2025-12-16T03:27:17.236436451Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:27:17.236522 containerd[1591]: 
time="2025-12-16T03:27:17.236485795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:17.237121 kubelet[2854]: E1216 03:27:17.236992 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:27:17.237332 kubelet[2854]: E1216 03:27:17.237093 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:27:17.238170 kubelet[2854]: E1216 03:27:17.238081 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fl6xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-kube-controllers-54c6dbfbf4-h7k9m_calico-system(f116a091-4f95-4334-821a-705964657507): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:17.239711 kubelet[2854]: E1216 03:27:17.239650 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c6dbfbf4-h7k9m" podUID="f116a091-4f95-4334-821a-705964657507" Dec 16 03:27:18.082670 containerd[1591]: time="2025-12-16T03:27:18.081134178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:27:18.238640 containerd[1591]: time="2025-12-16T03:27:18.238586407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:18.240958 containerd[1591]: time="2025-12-16T03:27:18.240788072Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:27:18.240958 containerd[1591]: time="2025-12-16T03:27:18.240857249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:18.241406 kubelet[2854]: E1216 03:27:18.241335 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:27:18.242528 kubelet[2854]: E1216 03:27:18.241939 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:27:18.242528 kubelet[2854]: E1216 03:27:18.242387 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzp8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-984d7f7b9-k94jh_calico-apiserver(574af3c5-d781-4fa8-842f-04bccc0c5fcf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:18.243849 kubelet[2854]: E1216 03:27:18.243765 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-k94jh" podUID="574af3c5-d781-4fa8-842f-04bccc0c5fcf" Dec 16 03:27:21.077691 containerd[1591]: time="2025-12-16T03:27:21.076712079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:27:21.233256 containerd[1591]: time="2025-12-16T03:27:21.233192821Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:21.235085 containerd[1591]: time="2025-12-16T03:27:21.234981649Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:27:21.235407 containerd[1591]: time="2025-12-16T03:27:21.235250380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:21.236937 kubelet[2854]: 
E1216 03:27:21.235640 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:27:21.236937 kubelet[2854]: E1216 03:27:21.235701 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:27:21.236937 kubelet[2854]: E1216 03:27:21.235891 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7t6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-984d7f7b9-qw7fg_calico-apiserver(58c0a3f8-05f5-4d50-84f5-6c400d162736): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:21.237987 kubelet[2854]: E1216 03:27:21.237832 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-984d7f7b9-qw7fg" podUID="58c0a3f8-05f5-4d50-84f5-6c400d162736" Dec 16 03:27:23.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.16:22-147.75.109.163:48348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:23.696456 systemd[1]: Started sshd@9-10.128.0.16:22-147.75.109.163:48348.service - OpenSSH per-connection server daemon (147.75.109.163:48348). Dec 16 03:27:23.715676 kernel: kauditd_printk_skb: 173 callbacks suppressed Dec 16 03:27:23.715833 kernel: audit: type=1130 audit(1765855643.695:732): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.16:22-147.75.109.163:48348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:24.022000 audit[4863]: USER_ACCT pid=4863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:24.024211 sshd[4863]: Accepted publickey for core from 147.75.109.163 port 48348 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:27:24.027905 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:27:24.050257 systemd-logind[1566]: New session 11 of user core. Dec 16 03:27:24.055469 kernel: audit: type=1101 audit(1765855644.022:733): pid=4863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:24.058194 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 03:27:24.089035 kernel: audit: type=1103 audit(1765855644.023:734): pid=4863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:24.023000 audit[4863]: CRED_ACQ pid=4863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:24.114414 kernel: audit: type=1006 audit(1765855644.023:735): pid=4863 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 03:27:24.023000 audit[4863]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfb48e680 a2=3 a3=0 items=0 ppid=1 pid=4863 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:24.147195 kernel: audit: type=1300 audit(1765855644.023:735): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfb48e680 a2=3 a3=0 items=0 ppid=1 pid=4863 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:24.023000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:27:24.161072 kernel: audit: type=1327 audit(1765855644.023:735): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:27:24.089000 audit[4863]: USER_START pid=4863 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:24.203014 kernel: audit: type=1105 audit(1765855644.089:736): pid=4863 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:24.094000 audit[4867]: CRED_ACQ pid=4867 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:24.241932 kernel: audit: type=1103 audit(1765855644.094:737): pid=4867 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:24.412378 sshd[4867]: Connection closed by 147.75.109.163 port 48348 Dec 16 03:27:24.415461 sshd-session[4863]: pam_unix(sshd:session): session closed for user core Dec 16 03:27:24.422000 audit[4863]: USER_END pid=4863 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:24.461285 kernel: audit: type=1106 audit(1765855644.422:738): pid=4863 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:24.422000 audit[4863]: CRED_DISP pid=4863 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:24.484530 systemd[1]: sshd@9-10.128.0.16:22-147.75.109.163:48348.service: Deactivated successfully. Dec 16 03:27:24.487300 kernel: audit: type=1104 audit(1765855644.422:739): pid=4863 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:24.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.16:22-147.75.109.163:48348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:24.489172 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 03:27:24.494413 systemd-logind[1566]: Session 11 logged out. Waiting for processes to exit. Dec 16 03:27:24.496727 systemd-logind[1566]: Removed session 11. 
Dec 16 03:27:25.079804 kubelet[2854]: E1216 03:27:25.079733 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767dd84f9b-gcdgn" podUID="9487d5f8-7cdd-4f1e-a411-61521d9e1c14" Dec 16 03:27:28.079937 kubelet[2854]: E1216 03:27:28.079127 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2xvkp" podUID="964e7c44-1e18-4e5b-8b6a-1130081b8647" Dec 16 03:27:28.081275 kubelet[2854]: E1216 03:27:28.081196 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mjkcx" podUID="d069b10a-0bb5-4869-a283-bc34fbcea4f8" Dec 16 03:27:29.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.16:22-147.75.109.163:48352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:29.475110 systemd[1]: Started sshd@10-10.128.0.16:22-147.75.109.163:48352.service - OpenSSH per-connection server daemon (147.75.109.163:48352). Dec 16 03:27:29.481936 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:27:29.482054 kernel: audit: type=1130 audit(1765855649.474:741): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.16:22-147.75.109.163:48352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:27:29.802000 audit[4907]: USER_ACCT pid=4907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:29.803605 sshd[4907]: Accepted publickey for core from 147.75.109.163 port 48352 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:27:29.834962 kernel: audit: type=1101 audit(1765855649.802:742): pid=4907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:29.838872 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:27:29.836000 audit[4907]: CRED_ACQ pid=4907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:29.863466 systemd-logind[1566]: New session 12 of user core. Dec 16 03:27:29.866221 kernel: audit: type=1103 audit(1765855649.836:743): pid=4907 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:29.836000 audit[4907]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3ee631a0 a2=3 a3=0 items=0 ppid=1 pid=4907 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:29.912435 kernel: audit: type=1006 audit(1765855649.836:744): pid=4907 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 03:27:29.912545 kernel: audit: type=1300 audit(1765855649.836:744): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3ee631a0 a2=3 a3=0 items=0 ppid=1 pid=4907 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:29.914180 kernel: audit: type=1327 audit(1765855649.836:744): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:27:29.836000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:27:29.913326 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 03:27:29.927000 audit[4907]: USER_START pid=4907 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:29.965961 kernel: audit: type=1105 audit(1765855649.927:745): pid=4907 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:29.933000 audit[4911]: CRED_ACQ pid=4911 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:29.992977 kernel: audit: type=1103 audit(1765855649.933:746): pid=4911 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:30.083392 kubelet[2854]: E1216 03:27:30.083071 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c6dbfbf4-h7k9m" podUID="f116a091-4f95-4334-821a-705964657507" Dec 16 03:27:30.088933 kubelet[2854]: E1216 03:27:30.086403 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-k94jh" podUID="574af3c5-d781-4fa8-842f-04bccc0c5fcf" Dec 16 03:27:30.187679 sshd[4911]: Connection closed by 147.75.109.163 port 48352 Dec 16 03:27:30.188470 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Dec 16 03:27:30.191000 audit[4907]: USER_END pid=4907 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:30.198297 systemd[1]: sshd@10-10.128.0.16:22-147.75.109.163:48352.service: Deactivated successfully. Dec 16 03:27:30.204142 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 03:27:30.208280 systemd-logind[1566]: Session 12 logged out. Waiting for processes to exit. Dec 16 03:27:30.210305 systemd-logind[1566]: Removed session 12. 
Dec 16 03:27:30.229955 kernel: audit: type=1106 audit(1765855650.191:747): pid=4907 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:30.230042 kernel: audit: type=1104 audit(1765855650.191:748): pid=4907 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:30.191000 audit[4907]: CRED_DISP pid=4907 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:30.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.16:22-147.75.109.163:48352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:35.246370 systemd[1]: Started sshd@11-10.128.0.16:22-147.75.109.163:60922.service - OpenSSH per-connection server daemon (147.75.109.163:60922). Dec 16 03:27:35.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.16:22-147.75.109.163:60922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:35.252246 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:27:35.252371 kernel: audit: type=1130 audit(1765855655.245:750): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.16:22-147.75.109.163:60922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:35.568000 audit[4924]: USER_ACCT pid=4924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:35.579879 sshd[4924]: Accepted publickey for core from 147.75.109.163 port 60922 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:27:35.583458 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:27:35.596800 systemd-logind[1566]: New session 13 of user core. 
Dec 16 03:27:35.578000 audit[4924]: CRED_ACQ pid=4924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:35.627269 kernel: audit: type=1101 audit(1765855655.568:751): pid=4924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:35.627383 kernel: audit: type=1103 audit(1765855655.578:752): pid=4924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:35.640429 kernel: audit: type=1006 audit(1765855655.578:753): pid=4924 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 03:27:35.578000 audit[4924]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0bbde250 a2=3 a3=0 items=0 ppid=1 pid=4924 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:35.674509 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 03:27:35.675235 kernel: audit: type=1300 audit(1765855655.578:753): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0bbde250 a2=3 a3=0 items=0 ppid=1 pid=4924 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:35.578000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:27:35.682000 audit[4924]: USER_START pid=4924 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:35.728420 kernel: audit: type=1327 audit(1765855655.578:753): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:27:35.729667 kernel: audit: type=1105 audit(1765855655.682:754): pid=4924 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:35.729768 kernel: audit: type=1103 audit(1765855655.700:755): pid=4928 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:35.700000 audit[4928]: CRED_ACQ pid=4928 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 
terminal=ssh res=success' Dec 16 03:27:35.890965 sshd[4928]: Connection closed by 147.75.109.163 port 60922 Dec 16 03:27:35.892151 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Dec 16 03:27:35.895000 audit[4924]: USER_END pid=4924 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:35.903510 systemd[1]: sshd@11-10.128.0.16:22-147.75.109.163:60922.service: Deactivated successfully. Dec 16 03:27:35.907364 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 03:27:35.914551 systemd-logind[1566]: Session 13 logged out. Waiting for processes to exit. Dec 16 03:27:35.916713 systemd-logind[1566]: Removed session 13. Dec 16 03:27:35.933957 kernel: audit: type=1106 audit(1765855655.895:756): pid=4924 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:35.934103 kernel: audit: type=1104 audit(1765855655.896:757): pid=4924 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:35.896000 audit[4924]: CRED_DISP pid=4924 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:35.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.16:22-147.75.109.163:60922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:35.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.16:22-147.75.109.163:60928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:35.972066 systemd[1]: Started sshd@12-10.128.0.16:22-147.75.109.163:60928.service - OpenSSH per-connection server daemon (147.75.109.163:60928). 
Dec 16 03:27:36.079673 containerd[1591]: time="2025-12-16T03:27:36.079290052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:27:36.083816 kubelet[2854]: E1216 03:27:36.083680 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-qw7fg" podUID="58c0a3f8-05f5-4d50-84f5-6c400d162736" Dec 16 03:27:36.237601 containerd[1591]: time="2025-12-16T03:27:36.237456742Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:36.239242 containerd[1591]: time="2025-12-16T03:27:36.239149494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:27:36.239898 containerd[1591]: time="2025-12-16T03:27:36.239166173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:36.240079 kubelet[2854]: E1216 03:27:36.239695 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:27:36.240079 kubelet[2854]: E1216 03:27:36.239871 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:27:36.240611 kubelet[2854]: E1216 03:27:36.240364 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f5fe397c66444a048cd1698a856f56dd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-86l85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767dd84f9b-gcdgn_calico-system(9487d5f8-7cdd-4f1e-a411-61521d9e1c14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:36.243948 containerd[1591]: time="2025-12-16T03:27:36.243877138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:27:36.264000 audit[4942]: USER_ACCT pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:36.265656 sshd[4942]: Accepted publickey for core from 147.75.109.163 port 60928 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:27:36.265000 audit[4942]: CRED_ACQ pid=4942 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:36.265000 audit[4942]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa32c9750 a2=3 a3=0 items=0 ppid=1 pid=4942 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:36.265000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:27:36.267930 sshd-session[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:27:36.275002 systemd-logind[1566]: New session 14 of user core. Dec 16 03:27:36.281167 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 03:27:36.285000 audit[4942]: USER_START pid=4942 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:36.289000 audit[4946]: CRED_ACQ pid=4946 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:36.405037 containerd[1591]: time="2025-12-16T03:27:36.404960449Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:36.408928 containerd[1591]: time="2025-12-16T03:27:36.407328139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:27:36.408928 containerd[1591]: time="2025-12-16T03:27:36.407437123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:36.409277 kubelet[2854]: E1216 03:27:36.409180 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:27:36.409463 kubelet[2854]: E1216 03:27:36.409294 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:27:36.409463 kubelet[2854]: E1216 03:27:36.409476 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86l85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767dd84f9b-gcdgn_calico-system(9487d5f8-7cdd-4f1e-a411-61521d9e1c14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:36.410856 kubelet[2854]: E1216 03:27:36.410806 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767dd84f9b-gcdgn" podUID="9487d5f8-7cdd-4f1e-a411-61521d9e1c14" Dec 16 03:27:36.545249 sshd[4946]: Connection closed by 147.75.109.163 port 60928 Dec 16 03:27:36.546518 sshd-session[4942]: pam_unix(sshd:session): session closed for user core Dec 16 03:27:36.549000 audit[4942]: USER_END pid=4942 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:36.550000 audit[4942]: CRED_DISP pid=4942 uid=0 auid=500 ses=14 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:36.558709 systemd-logind[1566]: Session 14 logged out. Waiting for processes to exit. Dec 16 03:27:36.560060 systemd[1]: sshd@12-10.128.0.16:22-147.75.109.163:60928.service: Deactivated successfully. Dec 16 03:27:36.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.16:22-147.75.109.163:60928 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:36.566587 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 03:27:36.572407 systemd-logind[1566]: Removed session 14. Dec 16 03:27:36.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.16:22-147.75.109.163:60940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:36.605324 systemd[1]: Started sshd@13-10.128.0.16:22-147.75.109.163:60940.service - OpenSSH per-connection server daemon (147.75.109.163:60940). Dec 16 03:27:36.907000 audit[4956]: USER_ACCT pid=4956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:36.909969 sshd[4956]: Accepted publickey for core from 147.75.109.163 port 60940 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:27:36.909000 audit[4956]: CRED_ACQ pid=4956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:36.909000 audit[4956]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7f959570 a2=3 a3=0 items=0 ppid=1 pid=4956 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:36.909000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:27:36.912234 sshd-session[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:27:36.926989 systemd-logind[1566]: New session 15 of user core. Dec 16 03:27:36.932184 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 03:27:36.938000 audit[4956]: USER_START pid=4956 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:36.943000 audit[4960]: CRED_ACQ pid=4960 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:37.237860 sshd[4960]: Connection closed by 147.75.109.163 port 60940 Dec 16 03:27:37.239191 sshd-session[4956]: pam_unix(sshd:session): session closed for user core Dec 16 03:27:37.243000 audit[4956]: USER_END pid=4956 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:37.244000 audit[4956]: CRED_DISP pid=4956 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:37.251718 systemd-logind[1566]: Session 15 logged out. Waiting for processes to exit. Dec 16 03:27:37.253479 systemd[1]: sshd@13-10.128.0.16:22-147.75.109.163:60940.service: Deactivated successfully. Dec 16 03:27:37.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.16:22-147.75.109.163:60940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:37.259382 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 03:27:37.267442 systemd-logind[1566]: Removed session 15. 
Dec 16 03:27:41.077391 containerd[1591]: time="2025-12-16T03:27:41.077143673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:27:41.237824 containerd[1591]: time="2025-12-16T03:27:41.237408953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:41.239365 containerd[1591]: time="2025-12-16T03:27:41.239207381Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:27:41.239365 containerd[1591]: time="2025-12-16T03:27:41.239318783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:41.240125 kubelet[2854]: E1216 03:27:41.240060 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:27:41.240625 kubelet[2854]: E1216 03:27:41.240139 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:27:41.240625 kubelet[2854]: E1216 03:27:41.240355 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hm9d8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2xvkp_calico-system(964e7c44-1e18-4e5b-8b6a-1130081b8647): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:41.242024 kubelet[2854]: E1216 03:27:41.241981 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2xvkp" podUID="964e7c44-1e18-4e5b-8b6a-1130081b8647" Dec 16 03:27:42.082514 containerd[1591]: time="2025-12-16T03:27:42.082444214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:27:42.245936 containerd[1591]: time="2025-12-16T03:27:42.245850881Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:42.247539 containerd[1591]: time="2025-12-16T03:27:42.247473233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:27:42.247712 containerd[1591]: time="2025-12-16T03:27:42.247509717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:42.248007 kubelet[2854]: E1216 03:27:42.247954 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:27:42.248586 kubelet[2854]: E1216 03:27:42.248025 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:27:42.248586 kubelet[2854]: E1216 03:27:42.248204 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrxhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mjkcx_calico-system(d069b10a-0bb5-4869-a283-bc34fbcea4f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:42.251189 containerd[1591]: time="2025-12-16T03:27:42.251139265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:27:42.294353 systemd[1]: Started sshd@14-10.128.0.16:22-147.75.109.163:60946.service - OpenSSH per-connection server daemon (147.75.109.163:60946). Dec 16 03:27:42.307966 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 03:27:42.308065 kernel: audit: type=1130 audit(1765855662.295:777): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.16:22-147.75.109.163:60946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:42.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.16:22-147.75.109.163:60946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:27:42.424936 containerd[1591]: time="2025-12-16T03:27:42.424569343Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:42.426669 containerd[1591]: time="2025-12-16T03:27:42.426597335Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:27:42.426830 containerd[1591]: time="2025-12-16T03:27:42.426735527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:42.427408 kubelet[2854]: E1216 03:27:42.427149 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:27:42.427563 kubelet[2854]: E1216 03:27:42.427426 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:27:42.428931 kubelet[2854]: E1216 03:27:42.428298 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrxhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-mjkcx_calico-system(d069b10a-0bb5-4869-a283-bc34fbcea4f8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:42.430931 kubelet[2854]: E1216 03:27:42.430544 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mjkcx" podUID="d069b10a-0bb5-4869-a283-bc34fbcea4f8" Dec 16 03:27:42.625517 sshd[4982]: Accepted publickey for core from 147.75.109.163 port 60946 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:27:42.624000 audit[4982]: USER_ACCT pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:42.631570 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:27:42.627000 audit[4982]: CRED_ACQ pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:42.683549 kernel: audit: type=1101 audit(1765855662.624:778): pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:42.683694 kernel: audit: type=1103 audit(1765855662.627:779): pid=4982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:42.705024 kernel: audit: type=1006 audit(1765855662.627:780): pid=4982 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 03:27:42.705147 kernel: audit: type=1300 audit(1765855662.627:780): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec43bc980 a2=3 a3=0 items=0 ppid=1 pid=4982 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:42.627000 audit[4982]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec43bc980 a2=3 a3=0 items=0 ppid=1 pid=4982 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:27:42.700238 systemd-logind[1566]: New session 16 of user core. Dec 16 03:27:42.627000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:27:42.744406 kernel: audit: type=1327 audit(1765855662.627:780): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:27:42.743204 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 03:27:42.754000 audit[4982]: USER_START pid=4982 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:42.792014 kernel: audit: type=1105 audit(1765855662.754:781): pid=4982 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:42.793000 audit[4986]: CRED_ACQ pid=4986 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:42.824955 kernel: audit: type=1103 audit(1765855662.793:782): pid=4986 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:43.078774 containerd[1591]: time="2025-12-16T03:27:43.077552107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:27:43.078940 sshd[4986]: Connection closed by 147.75.109.163 port 60946 Dec 16 03:27:43.078654 sshd-session[4982]: pam_unix(sshd:session): session closed for user core Dec 16 03:27:43.088000 audit[4982]: USER_END pid=4982 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:43.125963 kernel: audit: type=1106 audit(1765855663.088:783): pid=4982 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:43.131226 systemd[1]: sshd@14-10.128.0.16:22-147.75.109.163:60946.service: Deactivated successfully. Dec 16 03:27:43.124000 audit[4982]: CRED_DISP pid=4982 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:43.138290 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 03:27:43.143065 systemd-logind[1566]: Session 16 logged out. Waiting for processes to exit. 
Dec 16 03:27:43.145692 systemd-logind[1566]: Removed session 16. Dec 16 03:27:43.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.16:22-147.75.109.163:60946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:43.159203 kernel: audit: type=1104 audit(1765855663.124:784): pid=4982 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:43.296316 containerd[1591]: time="2025-12-16T03:27:43.296232884Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:43.298056 containerd[1591]: time="2025-12-16T03:27:43.297980988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:43.298580 containerd[1591]: time="2025-12-16T03:27:43.298163083Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:27:43.299109 kubelet[2854]: E1216 03:27:43.299043 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:27:43.300868 kubelet[2854]: E1216 03:27:43.299681 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:27:43.302147 kubelet[2854]: E1216 03:27:43.302010 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzp8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-984d7f7b9-k94jh_calico-apiserver(574af3c5-d781-4fa8-842f-04bccc0c5fcf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:43.303290 kubelet[2854]: E1216 03:27:43.303243 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-k94jh" podUID="574af3c5-d781-4fa8-842f-04bccc0c5fcf" Dec 16 03:27:45.077414 containerd[1591]: time="2025-12-16T03:27:45.077362124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:27:45.261384 containerd[1591]: time="2025-12-16T03:27:45.261325132Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:45.262830 containerd[1591]: time="2025-12-16T03:27:45.262769954Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:27:45.263007 containerd[1591]: time="2025-12-16T03:27:45.262887174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:45.263248 kubelet[2854]: E1216 03:27:45.263175 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:27:45.263745 kubelet[2854]: E1216 03:27:45.263248 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:27:45.263745 kubelet[2854]: E1216 03:27:45.263441 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fl6xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54c6dbfbf4-h7k9m_calico-system(f116a091-4f95-4334-821a-705964657507): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:45.265289 kubelet[2854]: E1216 03:27:45.265241 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c6dbfbf4-h7k9m" podUID="f116a091-4f95-4334-821a-705964657507" Dec 16 03:27:47.078255 containerd[1591]: time="2025-12-16T03:27:47.078064811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:27:47.233942 containerd[1591]: 
time="2025-12-16T03:27:47.233696241Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:27:47.235937 containerd[1591]: time="2025-12-16T03:27:47.235617932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:27:47.235937 containerd[1591]: time="2025-12-16T03:27:47.235740094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:27:47.236148 kubelet[2854]: E1216 03:27:47.236077 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:27:47.236645 kubelet[2854]: E1216 03:27:47.236152 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:27:47.237944 kubelet[2854]: E1216 03:27:47.236577 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7t6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-984d7f7b9-qw7fg_calico-apiserver(58c0a3f8-05f5-4d50-84f5-6c400d162736): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:27:47.238179 kubelet[2854]: E1216 03:27:47.238085 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-qw7fg" podUID="58c0a3f8-05f5-4d50-84f5-6c400d162736" Dec 16 03:27:48.078219 kubelet[2854]: E1216 03:27:48.078033 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767dd84f9b-gcdgn" podUID="9487d5f8-7cdd-4f1e-a411-61521d9e1c14" Dec 16 03:27:48.164985 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:27:48.165126 kernel: audit: type=1130 audit(1765855668.135:786): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.16:22-147.75.109.163:39476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:48.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.16:22-147.75.109.163:39476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:48.135729 systemd[1]: Started sshd@15-10.128.0.16:22-147.75.109.163:39476.service - OpenSSH per-connection server daemon (147.75.109.163:39476). 
Dec 16 03:27:48.455000 audit[5000]: USER_ACCT pid=5000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:48.487944 kernel: audit: type=1101 audit(1765855668.455:787): pid=5000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:48.490032 sshd[5000]: Accepted publickey for core from 147.75.109.163 port 39476 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:27:48.495008 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:27:48.490000 audit[5000]: CRED_ACQ pid=5000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:48.510747 systemd-logind[1566]: New session 17 of user core. Dec 16 03:27:48.527937 kernel: audit: type=1103 audit(1765855668.490:788): pid=5000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:48.533241 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 03:27:48.552169 kernel: audit: type=1006 audit(1765855668.490:789): pid=5000 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 03:27:48.490000 audit[5000]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc07491ab0 a2=3 a3=0 items=0 ppid=1 pid=5000 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:48.589965 kernel: audit: type=1300 audit(1765855668.490:789): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc07491ab0 a2=3 a3=0 items=0 ppid=1 pid=5000 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:48.490000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:27:48.601938 kernel: audit: type=1327 audit(1765855668.490:789): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:27:48.556000 audit[5000]: USER_START pid=5000 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:48.600000 audit[5004]: CRED_ACQ pid=5004 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 
03:27:48.663974 kernel: audit: type=1105 audit(1765855668.556:790): pid=5000 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:48.664102 kernel: audit: type=1103 audit(1765855668.600:791): pid=5004 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:48.865192 sshd[5004]: Connection closed by 147.75.109.163 port 39476 Dec 16 03:27:48.867190 sshd-session[5000]: pam_unix(sshd:session): session closed for user core Dec 16 03:27:48.870000 audit[5000]: USER_END pid=5000 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:48.876333 systemd[1]: sshd@15-10.128.0.16:22-147.75.109.163:39476.service: Deactivated successfully. Dec 16 03:27:48.881101 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 03:27:48.907938 kernel: audit: type=1106 audit(1765855668.870:792): pid=5000 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:48.870000 audit[5000]: CRED_DISP pid=5000 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:48.914050 systemd-logind[1566]: Session 17 logged out. Waiting for processes to exit. Dec 16 03:27:48.916387 systemd-logind[1566]: Removed session 17. Dec 16 03:27:48.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.16:22-147.75.109.163:39476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:27:48.934958 kernel: audit: type=1104 audit(1765855668.870:793): pid=5000 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:53.078211 kubelet[2854]: E1216 03:27:53.078115 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mjkcx" podUID="d069b10a-0bb5-4869-a283-bc34fbcea4f8" Dec 16 03:27:53.920424 systemd[1]: Started sshd@16-10.128.0.16:22-147.75.109.163:53818.service - OpenSSH per-connection server daemon (147.75.109.163:53818). Dec 16 03:27:53.937397 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:27:53.937509 kernel: audit: type=1130 audit(1765855673.920:795): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.16:22-147.75.109.163:53818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:53.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.16:22-147.75.109.163:53818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:54.238000 audit[5018]: USER_ACCT pid=5018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:54.242699 sshd-session[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:27:54.245684 sshd[5018]: Accepted publickey for core from 147.75.109.163 port 53818 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:27:54.265759 systemd-logind[1566]: New session 18 of user core. 
Dec 16 03:27:54.238000 audit[5018]: CRED_ACQ pid=5018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:54.297116 kernel: audit: type=1101 audit(1765855674.238:796): pid=5018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:54.297217 kernel: audit: type=1103 audit(1765855674.238:797): pid=5018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:54.297263 kernel: audit: type=1006 audit(1765855674.238:798): pid=5018 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 03:27:54.238000 audit[5018]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff10d6d510 a2=3 a3=0 items=0 ppid=1 pid=5018 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:54.343261 kernel: audit: type=1300 audit(1765855674.238:798): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff10d6d510 a2=3 a3=0 items=0 ppid=1 pid=5018 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:27:54.238000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:27:54.354065 kernel: audit: type=1327 audit(1765855674.238:798): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:27:54.355558 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 03:27:54.361000 audit[5018]: USER_START pid=5018 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:54.365000 audit[5022]: CRED_ACQ pid=5022 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:54.424141 kernel: audit: type=1105 audit(1765855674.361:799): pid=5018 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:54.424228 kernel: audit: type=1103 audit(1765855674.365:800): pid=5022 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:54.598898 sshd[5022]: Connection closed by 147.75.109.163 port 53818 Dec 16 03:27:54.599605 sshd-session[5018]: pam_unix(sshd:session): session closed for user core Dec 16 03:27:54.604000 audit[5018]: USER_END pid=5018 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:54.641549 kernel: audit: type=1106 audit(1765855674.604:801): pid=5018 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:54.615000 audit[5018]: CRED_DISP pid=5018 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:54.657374 systemd[1]: sshd@16-10.128.0.16:22-147.75.109.163:53818.service: Deactivated successfully. Dec 16 03:27:54.662679 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 03:27:54.671968 kernel: audit: type=1104 audit(1765855674.615:802): pid=5018 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:27:54.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.16:22-147.75.109.163:53818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:54.673889 systemd-logind[1566]: Session 18 logged out. Waiting for processes to exit. 
Dec 16 03:27:54.675727 systemd-logind[1566]: Removed session 18. Dec 16 03:27:55.078576 kubelet[2854]: E1216 03:27:55.078260 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-k94jh" podUID="574af3c5-d781-4fa8-842f-04bccc0c5fcf" Dec 16 03:27:55.079235 kubelet[2854]: E1216 03:27:55.078687 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2xvkp" podUID="964e7c44-1e18-4e5b-8b6a-1130081b8647" Dec 16 03:27:59.076904 kubelet[2854]: E1216 03:27:59.076825 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-qw7fg" podUID="58c0a3f8-05f5-4d50-84f5-6c400d162736" Dec 16 03:27:59.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.16:22-147.75.109.163:53832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:27:59.661666 systemd[1]: Started sshd@17-10.128.0.16:22-147.75.109.163:53832.service - OpenSSH per-connection server daemon (147.75.109.163:53832). Dec 16 03:27:59.667405 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:27:59.667524 kernel: audit: type=1130 audit(1765855679.660:804): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.16:22-147.75.109.163:53832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:00.001000 audit[5061]: USER_ACCT pid=5061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:00.004319 sshd[5061]: Accepted publickey for core from 147.75.109.163 port 53832 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:28:00.007393 sshd-session[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:28:00.015860 systemd-logind[1566]: New session 19 of user core. 
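Editor's note: the pod_workers.go:1301 entries repeat with the same shape throughout this section: an err= payload naming the container and the image it is backing off on, followed by pod= and podUID= fields. A rough sketch for collapsing those repetitions into the distinct failing image references per pod; the regexes follow the field shapes seen above, and the helper name is hypothetical (not part of kubelet):

# Sketch: pull the pod identity and the failing image references out of a
# kubelet "Error syncing pod" line; regexes follow the field shapes above.
import re

POD_RE = re.compile(r'pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"')
IMG_RE = re.compile(r'ghcr\.io/[\w./-]+:[\w.-]+')

def parse_sync_error(line: str):
    pod = POD_RE.search(line)
    images = sorted(set(IMG_RE.findall(line)))
    if pod is None:
        return None
    return pod["pod"], pod["uid"], images

Applied to the csi-node-driver entry earlier, it returns calico-system/csi-node-driver-mjkcx, its UID, and the csi and node-driver-registrar images at v3.30.4.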
Dec 16 03:28:00.033951 kernel: audit: type=1101 audit(1765855680.001:805): pid=5061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:00.002000 audit[5061]: CRED_ACQ pid=5061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:00.076787 kernel: audit: type=1103 audit(1765855680.002:806): pid=5061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:00.079009 kernel: audit: type=1006 audit(1765855680.002:807): pid=5061 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 03:28:00.077214 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 03:28:00.002000 audit[5061]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3725e6c0 a2=3 a3=0 items=0 ppid=1 pid=5061 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:00.086122 kubelet[2854]: E1216 03:28:00.083976 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767dd84f9b-gcdgn" podUID="9487d5f8-7cdd-4f1e-a411-61521d9e1c14" Dec 16 03:28:00.116384 kernel: audit: type=1300 audit(1765855680.002:807): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3725e6c0 a2=3 a3=0 items=0 ppid=1 pid=5061 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:00.002000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:00.131953 kernel: audit: type=1327 audit(1765855680.002:807): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:00.117000 audit[5061]: USER_START pid=5061 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:00.171935 kernel: audit: 
type=1105 audit(1765855680.117:808): pid=5061 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:00.120000 audit[5065]: CRED_ACQ pid=5065 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:00.205934 kernel: audit: type=1103 audit(1765855680.120:809): pid=5065 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:00.415242 sshd[5065]: Connection closed by 147.75.109.163 port 53832 Dec 16 03:28:00.416069 sshd-session[5061]: pam_unix(sshd:session): session closed for user core Dec 16 03:28:00.418000 audit[5061]: USER_END pid=5061 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:00.424862 systemd-logind[1566]: Session 19 logged out. Waiting for processes to exit. Dec 16 03:28:00.428244 systemd[1]: sshd@17-10.128.0.16:22-147.75.109.163:53832.service: Deactivated successfully. Dec 16 03:28:00.438028 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 03:28:00.444733 systemd-logind[1566]: Removed session 19. Dec 16 03:28:00.455945 kernel: audit: type=1106 audit(1765855680.418:810): pid=5061 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:00.418000 audit[5061]: CRED_DISP pid=5061 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:00.493028 kernel: audit: type=1104 audit(1765855680.418:811): pid=5061 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:00.492359 systemd[1]: Started sshd@18-10.128.0.16:22-147.75.109.163:53842.service - OpenSSH per-connection server daemon (147.75.109.163:53842). Dec 16 03:28:00.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.16:22-147.75.109.163:53832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:28:00.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.16:22-147.75.109.163:53842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:00.808000 audit[5078]: USER_ACCT pid=5078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:00.811207 sshd[5078]: Accepted publickey for core from 147.75.109.163 port 53842 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:28:00.812000 audit[5078]: CRED_ACQ pid=5078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:00.812000 audit[5078]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3e8cca10 a2=3 a3=0 items=0 ppid=1 pid=5078 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:00.812000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:00.816186 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:28:00.828193 systemd-logind[1566]: New session 20 of user core. Dec 16 03:28:00.836151 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 03:28:00.843000 audit[5078]: USER_START pid=5078 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:00.847000 audit[5083]: CRED_ACQ pid=5083 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:01.079763 kubelet[2854]: E1216 03:28:01.078223 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c6dbfbf4-h7k9m" podUID="f116a091-4f95-4334-821a-705964657507" Dec 16 03:28:01.141018 sshd[5083]: Connection closed by 147.75.109.163 port 53842 Dec 16 03:28:01.144145 sshd-session[5078]: pam_unix(sshd:session): session closed for user core Dec 16 03:28:01.145000 audit[5078]: USER_END pid=5078 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:01.145000 audit[5078]: CRED_DISP pid=5078 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:01.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.16:22-147.75.109.163:53842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:01.152545 systemd[1]: sshd@18-10.128.0.16:22-147.75.109.163:53842.service: Deactivated successfully. Dec 16 03:28:01.154745 systemd-logind[1566]: Session 20 logged out. Waiting for processes to exit. Dec 16 03:28:01.157921 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 03:28:01.167094 systemd-logind[1566]: Removed session 20. Dec 16 03:28:01.199654 systemd[1]: Started sshd@19-10.128.0.16:22-147.75.109.163:53852.service - OpenSSH per-connection server daemon (147.75.109.163:53852). Dec 16 03:28:01.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.16:22-147.75.109.163:53852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:01.508000 audit[5093]: USER_ACCT pid=5093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:01.510076 sshd[5093]: Accepted publickey for core from 147.75.109.163 port 53852 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:28:01.511000 audit[5093]: CRED_ACQ pid=5093 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:01.511000 audit[5093]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1df2ac90 a2=3 a3=0 items=0 ppid=1 pid=5093 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:01.511000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:01.515117 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:28:01.532706 systemd-logind[1566]: New session 21 of user core. Dec 16 03:28:01.536157 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 03:28:01.543000 audit[5093]: USER_START pid=5093 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:01.548000 audit[5097]: CRED_ACQ pid=5097 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:02.667249 sshd[5097]: Connection closed by 147.75.109.163 port 53852 Dec 16 03:28:02.666282 sshd-session[5093]: pam_unix(sshd:session): session closed for user core Dec 16 03:28:02.668000 audit[5093]: USER_END pid=5093 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:02.668000 audit[5093]: CRED_DISP pid=5093 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:02.676609 systemd[1]: sshd@19-10.128.0.16:22-147.75.109.163:53852.service: Deactivated successfully. Dec 16 03:28:02.677067 systemd-logind[1566]: Session 21 logged out. Waiting for processes to exit. Dec 16 03:28:02.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.16:22-147.75.109.163:53852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:02.684570 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 03:28:02.691858 systemd-logind[1566]: Removed session 21. Dec 16 03:28:02.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.16:22-147.75.109.163:44944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:02.725048 systemd[1]: Started sshd@20-10.128.0.16:22-147.75.109.163:44944.service - OpenSSH per-connection server daemon (147.75.109.163:44944). 
Dec 16 03:28:02.753000 audit[5112]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5112 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:02.753000 audit[5112]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe4d269010 a2=0 a3=7ffe4d268ffc items=0 ppid=2998 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:02.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:02.759000 audit[5112]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5112 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:02.759000 audit[5112]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe4d269010 a2=0 a3=0 items=0 ppid=2998 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:02.759000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:02.788000 audit[5116]: NETFILTER_CFG table=filter:148 family=2 entries=38 op=nft_register_rule pid=5116 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:02.788000 audit[5116]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc872de1a0 a2=0 a3=7ffc872de18c items=0 ppid=2998 pid=5116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:02.788000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:02.791000 audit[5116]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5116 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:02.791000 audit[5116]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc872de1a0 a2=0 a3=0 items=0 ppid=2998 pid=5116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:02.791000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:03.021000 audit[5111]: USER_ACCT pid=5111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:03.022777 sshd[5111]: Accepted publickey for core from 147.75.109.163 port 44944 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:28:03.024000 audit[5111]: CRED_ACQ pid=5111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:03.024000 audit[5111]: 
SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc5777b70 a2=3 a3=0 items=0 ppid=1 pid=5111 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:03.024000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:03.026559 sshd-session[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:28:03.038133 systemd-logind[1566]: New session 22 of user core. Dec 16 03:28:03.045225 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 03:28:03.054000 audit[5111]: USER_START pid=5111 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:03.058000 audit[5118]: CRED_ACQ pid=5118 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:03.513177 sshd[5118]: Connection closed by 147.75.109.163 port 44944 Dec 16 03:28:03.517418 sshd-session[5111]: pam_unix(sshd:session): session closed for user core Dec 16 03:28:03.519000 audit[5111]: USER_END pid=5111 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:03.520000 audit[5111]: CRED_DISP pid=5111 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:03.525640 systemd-logind[1566]: Session 22 logged out. Waiting for processes to exit. Dec 16 03:28:03.526896 systemd[1]: sshd@20-10.128.0.16:22-147.75.109.163:44944.service: Deactivated successfully. Dec 16 03:28:03.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.16:22-147.75.109.163:44944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:03.531759 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 03:28:03.539323 systemd-logind[1566]: Removed session 22. Dec 16 03:28:03.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.16:22-147.75.109.163:44960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:03.573087 systemd[1]: Started sshd@21-10.128.0.16:22-147.75.109.163:44960.service - OpenSSH per-connection server daemon (147.75.109.163:44960). 
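Editor's note: the NETFILTER_CFG records a few entries back (type=1325, comm="iptables-restor", ppid 2998) show periodic iptables-restore runs rewriting the filter and nat tables; their hex proctitle decodes to iptables-restore -w 5 -W 100000 --noflush --counters. A short sketch that tallies such records by table and operation, with the regex written against the field layout shown in this log:

# Sketch: tally NETFILTER_CFG audit records by table and operation.
# The regex matches the field layout seen in the records above.
import re
from collections import Counter

NFT_RE = re.compile(r"table=(?P<table>\w+):\d+ family=\d+ "
                    r"entries=(?P<entries>\d+) op=(?P<op>\w+)")

def summarize(lines):
    totals = Counter()
    for line in lines:
        m = NFT_RE.search(line)
        if m:
            totals[(m["table"], m["op"])] += int(m["entries"])
    return totals

sample = [
    'audit[5112]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule',
    'audit[5112]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule',
]
print(summarize(sample))
# -> Counter({('filter', 'nft_register_rule'): 26, ('nat', 'nft_register_rule'): 20})

The sample values are copied from the records above; run over the full log this would also pick up the larger nat-table rewrite (104 registered chains) seen a little later.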
Dec 16 03:28:03.871000 audit[5128]: USER_ACCT pid=5128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:03.872951 sshd[5128]: Accepted publickey for core from 147.75.109.163 port 44960 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:28:03.875000 audit[5128]: CRED_ACQ pid=5128 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:03.876000 audit[5128]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed70c00b0 a2=3 a3=0 items=0 ppid=1 pid=5128 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:03.876000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:03.878310 sshd-session[5128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:28:03.889518 systemd-logind[1566]: New session 23 of user core. Dec 16 03:28:03.896155 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 03:28:03.903000 audit[5128]: USER_START pid=5128 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:03.909000 audit[5133]: CRED_ACQ pid=5133 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:04.128022 sshd[5133]: Connection closed by 147.75.109.163 port 44960 Dec 16 03:28:04.128720 sshd-session[5128]: pam_unix(sshd:session): session closed for user core Dec 16 03:28:04.131000 audit[5128]: USER_END pid=5128 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:04.132000 audit[5128]: CRED_DISP pid=5128 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:04.139029 systemd[1]: sshd@21-10.128.0.16:22-147.75.109.163:44960.service: Deactivated successfully. Dec 16 03:28:04.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.16:22-147.75.109.163:44960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:04.140269 systemd-logind[1566]: Session 23 logged out. Waiting for processes to exit. 
Dec 16 03:28:04.144503 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 03:28:04.149952 systemd-logind[1566]: Removed session 23. Dec 16 03:28:06.083422 kubelet[2854]: E1216 03:28:06.083260 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-k94jh" podUID="574af3c5-d781-4fa8-842f-04bccc0c5fcf" Dec 16 03:28:06.086486 kubelet[2854]: E1216 03:28:06.086412 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mjkcx" podUID="d069b10a-0bb5-4869-a283-bc34fbcea4f8" Dec 16 03:28:09.202299 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 03:28:09.202485 kernel: audit: type=1130 audit(1765855689.187:853): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.16:22-147.75.109.163:44972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:09.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.16:22-147.75.109.163:44972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:09.187878 systemd[1]: Started sshd@22-10.128.0.16:22-147.75.109.163:44972.service - OpenSSH per-connection server daemon (147.75.109.163:44972). 
Dec 16 03:28:09.464000 audit[5150]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=5150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:09.482030 kernel: audit: type=1325 audit(1765855689.464:854): table=filter:150 family=2 entries=26 op=nft_register_rule pid=5150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:09.464000 audit[5150]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcc0ced240 a2=0 a3=7ffcc0ced22c items=0 ppid=2998 pid=5150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:09.464000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:09.531394 kernel: audit: type=1300 audit(1765855689.464:854): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcc0ced240 a2=0 a3=7ffcc0ced22c items=0 ppid=2998 pid=5150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:09.531487 kernel: audit: type=1327 audit(1765855689.464:854): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:09.531534 kernel: audit: type=1325 audit(1765855689.528:855): table=nat:151 family=2 entries=104 op=nft_register_chain pid=5150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:09.528000 audit[5150]: NETFILTER_CFG table=nat:151 family=2 entries=104 op=nft_register_chain pid=5150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:28:09.528000 audit[5150]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffcc0ced240 a2=0 a3=7ffcc0ced22c items=0 ppid=2998 pid=5150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:09.580837 kernel: audit: type=1300 audit(1765855689.528:855): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffcc0ced240 a2=0 a3=7ffcc0ced22c items=0 ppid=2998 pid=5150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:09.528000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:09.589021 sshd[5146]: Accepted publickey for core from 147.75.109.163 port 44972 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:28:09.594633 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:28:09.587000 audit[5146]: USER_ACCT pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:09.628582 kernel: audit: type=1327 audit(1765855689.528:855): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:28:09.628666 kernel: audit: type=1101 audit(1765855689.587:856): pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:09.592000 audit[5146]: CRED_ACQ pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:09.657352 kernel: audit: type=1103 audit(1765855689.592:857): pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:09.657441 kernel: audit: type=1006 audit(1765855689.592:858): pid=5146 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 03:28:09.592000 audit[5146]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfe647850 a2=3 a3=0 items=0 ppid=1 pid=5146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:09.592000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:09.678460 systemd-logind[1566]: New session 24 of user core. Dec 16 03:28:09.687357 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 03:28:09.693000 audit[5146]: USER_START pid=5146 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:09.696000 audit[5152]: CRED_ACQ pid=5152 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:09.922347 sshd[5152]: Connection closed by 147.75.109.163 port 44972 Dec 16 03:28:09.926223 sshd-session[5146]: pam_unix(sshd:session): session closed for user core Dec 16 03:28:09.928000 audit[5146]: USER_END pid=5146 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:09.929000 audit[5146]: CRED_DISP pid=5146 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:09.937734 systemd[1]: sshd@22-10.128.0.16:22-147.75.109.163:44972.service: Deactivated successfully. 
Dec 16 03:28:09.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.16:22-147.75.109.163:44972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:09.943279 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 03:28:09.949543 systemd-logind[1566]: Session 24 logged out. Waiting for processes to exit. Dec 16 03:28:09.954556 systemd-logind[1566]: Removed session 24. Dec 16 03:28:10.082098 kubelet[2854]: E1216 03:28:10.082032 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2xvkp" podUID="964e7c44-1e18-4e5b-8b6a-1130081b8647" Dec 16 03:28:11.077726 kubelet[2854]: E1216 03:28:11.077609 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767dd84f9b-gcdgn" podUID="9487d5f8-7cdd-4f1e-a411-61521d9e1c14" Dec 16 03:28:14.080033 kubelet[2854]: E1216 03:28:14.079941 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-qw7fg" podUID="58c0a3f8-05f5-4d50-84f5-6c400d162736" Dec 16 03:28:14.082217 kubelet[2854]: E1216 03:28:14.081143 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c6dbfbf4-h7k9m" podUID="f116a091-4f95-4334-821a-705964657507" Dec 16 03:28:14.994202 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 03:28:14.994341 kernel: audit: type=1130 audit(1765855694.985:864): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.16:22-147.75.109.163:42960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 03:28:14.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.16:22-147.75.109.163:42960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:14.986093 systemd[1]: Started sshd@23-10.128.0.16:22-147.75.109.163:42960.service - OpenSSH per-connection server daemon (147.75.109.163:42960). Dec 16 03:28:15.303000 audit[5166]: USER_ACCT pid=5166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:15.305697 sshd[5166]: Accepted publickey for core from 147.75.109.163 port 42960 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:28:15.308393 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:28:15.334963 kernel: audit: type=1101 audit(1765855695.303:865): pid=5166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:15.303000 audit[5166]: CRED_ACQ pid=5166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:15.363968 kernel: audit: type=1103 audit(1765855695.303:866): pid=5166 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:15.365732 systemd-logind[1566]: New session 25 of user core. Dec 16 03:28:15.303000 audit[5166]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb37a15c0 a2=3 a3=0 items=0 ppid=1 pid=5166 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:15.413944 kernel: audit: type=1006 audit(1765855695.303:867): pid=5166 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 03:28:15.414061 kernel: audit: type=1300 audit(1765855695.303:867): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb37a15c0 a2=3 a3=0 items=0 ppid=1 pid=5166 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:15.414113 kernel: audit: type=1327 audit(1765855695.303:867): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:15.303000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:15.425254 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 03:28:15.431000 audit[5166]: USER_START pid=5166 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:15.470125 kernel: audit: type=1105 audit(1765855695.431:868): pid=5166 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:15.470216 kernel: audit: type=1103 audit(1765855695.468:869): pid=5170 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:15.468000 audit[5170]: CRED_ACQ pid=5170 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:15.702122 sshd[5170]: Connection closed by 147.75.109.163 port 42960 Dec 16 03:28:15.703207 sshd-session[5166]: pam_unix(sshd:session): session closed for user core Dec 16 03:28:15.705000 audit[5166]: USER_END pid=5166 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:15.713124 systemd[1]: sshd@23-10.128.0.16:22-147.75.109.163:42960.service: Deactivated successfully. Dec 16 03:28:15.725251 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 03:28:15.745047 kernel: audit: type=1106 audit(1765855695.705:870): pid=5166 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:15.771029 kernel: audit: type=1104 audit(1765855695.705:871): pid=5166 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:15.705000 audit[5166]: CRED_DISP pid=5166 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:15.746383 systemd-logind[1566]: Session 25 logged out. Waiting for processes to exit. Dec 16 03:28:15.749668 systemd-logind[1566]: Removed session 25. 
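Editor's note: every SSH session in this section leaves a matched pair of PAM records, USER_START (type 1105) at open and USER_END (type 1106) at close, carrying the same ses= number; session 25 above, for example, opens at audit(1765855695.431:868) and closes at audit(1765855695.705:870). A rough sketch that pairs them into per-session lifetimes, assuming it is scanning the kernel-printed duplicates that carry the audit(<epoch>:<serial>) stamp:

# Sketch: pair USER_START / USER_END records by ses= to get per-session
# lifetimes, scanning the kernel-printed duplicates that carry the
# audit(<epoch>:<serial>) stamp as seen above.
import re

STAMP_RE = re.compile(r"audit\((?P<secs>\d+\.\d+):\d+\).*?ses=(?P<ses>\d+)")

def session_durations(lines):
    opened, closed = {}, {}
    for line in lines:
        m = STAMP_RE.search(line)
        if not m:
            continue
        if "type=1105" in line:      # USER_START / session_open
            opened[m["ses"]] = float(m["secs"])
        elif "type=1106" in line:    # USER_END / session_close
            closed[m["ses"]] = float(m["secs"])
    return {ses: closed[ses] - opened[ses]
            for ses in opened if ses in closed}

For session 25 above this gives roughly 0.27 s between open and close.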
Dec 16 03:28:15.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.16:22-147.75.109.163:42960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:20.079681 kubelet[2854]: E1216 03:28:20.079214 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-984d7f7b9-k94jh" podUID="574af3c5-d781-4fa8-842f-04bccc0c5fcf" Dec 16 03:28:20.760113 systemd[1]: Started sshd@24-10.128.0.16:22-147.75.109.163:42970.service - OpenSSH per-connection server daemon (147.75.109.163:42970). Dec 16 03:28:20.790878 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:28:20.790980 kernel: audit: type=1130 audit(1765855700.759:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.16:22-147.75.109.163:42970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:20.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.16:22-147.75.109.163:42970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:21.077169 kubelet[2854]: E1216 03:28:21.077013 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mjkcx" podUID="d069b10a-0bb5-4869-a283-bc34fbcea4f8" Dec 16 03:28:21.102000 audit[5190]: USER_ACCT pid=5190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:21.109158 sshd-session[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:28:21.116589 sshd[5190]: Accepted publickey for core from 147.75.109.163 port 42970 ssh2: RSA SHA256:OxBniy/gByTBlZ9rJUmG99cwz5VFumUnQPpBWaEOpxo Dec 16 03:28:21.134105 kernel: audit: type=1101 audit(1765855701.102:874): pid=5190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 
terminal=ssh res=success' Dec 16 03:28:21.102000 audit[5190]: CRED_ACQ pid=5190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:21.161933 kernel: audit: type=1103 audit(1765855701.102:875): pid=5190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:28:21.171103 systemd-logind[1566]: New session 26 of user core. Dec 16 03:28:21.184942 kernel: audit: type=1006 audit(1765855701.102:876): pid=5190 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 03:28:21.102000 audit[5190]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe632efe10 a2=3 a3=0 items=0 ppid=1 pid=5190 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:21.218096 kernel: audit: type=1300 audit(1765855701.102:876): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe632efe10 a2=3 a3=0 items=0 ppid=1 pid=5190 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:21.218193 kernel: audit: type=1327 audit(1765855701.102:876): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:21.102000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:28:21.219491 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 03:28:21.230000 audit[5190]: USER_START pid=5190 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:28:21.267946 kernel: audit: type=1105 audit(1765855701.230:877): pid=5190 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:28:21.268000 audit[5194]: CRED_ACQ pid=5194 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:28:21.296948 kernel: audit: type=1103 audit(1765855701.268:878): pid=5194 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:28:21.529952 sshd[5194]: Connection closed by 147.75.109.163 port 42970
Dec 16 03:28:21.531653 sshd-session[5190]: pam_unix(sshd:session): session closed for user core
Dec 16 03:28:21.537000 audit[5190]: USER_END pid=5190 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:28:21.543536 systemd[1]: sshd@24-10.128.0.16:22-147.75.109.163:42970.service: Deactivated successfully.
Dec 16 03:28:21.544073 systemd-logind[1566]: Session 26 logged out. Waiting for processes to exit.
Dec 16 03:28:21.549226 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 03:28:21.554698 systemd-logind[1566]: Removed session 26.
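The kernel "audit:" echoes above carry an audit(<epoch>.<msec>:<serial>) stamp, e.g. audit(1765855701.230:877): the epoch is the same instant the userspace audit[5190] USER_START record is journalled at 03:28:21.230000, and the serial groups the type=1300/1327 records of a single event. A minimal Go sketch of decoding that stamp follows; the sample line is copied from this log, but the decoder itself is purely illustrative and not a tool referenced anywhere in the log.

// Minimal sketch: decode the audit(<epoch>.<msec>:<serial>) stamp that the
// kernel "audit:" echo lines carry and print it as a wall-clock time.
// The epoch, milliseconds and serial come from the log above; the code is
// illustrative only.
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"time"
)

var stampRe = regexp.MustCompile(`audit\((\d+)\.(\d{3}):(\d+)\)`)

func main() {
	line := `kernel: audit: type=1105 audit(1765855701.230:877): pid=5190 uid=0 auid=500 ses=26`

	m := stampRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no audit stamp found")
		return
	}
	sec, _ := strconv.ParseInt(m[1], 10, 64)
	ms, _ := strconv.ParseInt(m[2], 10, 64)
	serial := m[3]

	t := time.Unix(sec, ms*int64(time.Millisecond)).UTC()
	// Prints: serial 877 -> 2025-12-16 03:28:21.23 +0000 UTC, which matches
	// the journald timestamp on the corresponding USER_START record.
	fmt.Printf("serial %s -> %s\n", serial, t)
}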
Dec 16 03:28:21.537000 audit[5190]: CRED_DISP pid=5190 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:28:21.599471 kernel: audit: type=1106 audit(1765855701.537:879): pid=5190 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:28:21.599559 kernel: audit: type=1104 audit(1765855701.537:880): pid=5190 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:28:21.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.16:22-147.75.109.163:42970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:23.077464 containerd[1591]: time="2025-12-16T03:28:23.077390555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 16 03:28:23.232068 containerd[1591]: time="2025-12-16T03:28:23.231979843Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 03:28:23.233855 containerd[1591]: time="2025-12-16T03:28:23.233791458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 16 03:28:23.234111 containerd[1591]: time="2025-12-16T03:28:23.233821697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Dec 16 03:28:23.234281 kubelet[2854]: E1216 03:28:23.234231 2854 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 03:28:23.234760 kubelet[2854]: E1216 03:28:23.234327 2854 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 03:28:23.235287 kubelet[2854]: E1216 03:28:23.234763 2854 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hm9d8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2xvkp_calico-system(964e7c44-1e18-4e5b-8b6a-1130081b8647): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 03:28:23.236312 kubelet[2854]: E1216 03:28:23.236069 2854 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2xvkp" podUID="964e7c44-1e18-4e5b-8b6a-1130081b8647"
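Every goldmane/csi/apiserver failure above is the same underlying condition: containerd gets a 404 from ghcr.io for the v3.30.4 tag, returns NotFound over CRI, and the kubelet surfaces that as ErrImagePull and then ImagePullBackOff. A minimal Go sketch of retrying the same resolution step directly against the node's containerd follows; the socket path and the k8s.io namespace are assumed CRI defaults, not values taken from this log.

// Minimal sketch: ask containerd to resolve and pull the same reference the
// kubelet failed on. While the tag is missing from ghcr.io this should fail
// with the same "not found" error seen in the log. The socket path and the
// "k8s.io" namespace are assumptions (standard CRI defaults).
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		fmt.Println("connect:", err)
		return
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/goldmane:v3.30.4" // reference from the log
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		// Mirrors the log: "failed to resolve image: ...: not found"
		fmt.Println("pull failed:", err)
		return
	}
	fmt.Println("pulled", img.Name())
}

The same check from a shell would be roughly "ctr -n k8s.io images pull ghcr.io/flatcar/calico/goldmane:v3.30.4", which should keep failing with the identical not-found error until the tag is actually published.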