Dec 16 03:28:37.652785 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:18:19 -00 2025
Dec 16 03:28:37.652816 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 03:28:37.652829 kernel: BIOS-provided physical RAM map:
Dec 16 03:28:37.652837 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 03:28:37.652846 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Dec 16 03:28:37.652860 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Dec 16 03:28:37.652879 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Dec 16 03:28:37.652887 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Dec 16 03:28:37.652894 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Dec 16 03:28:37.652909 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Dec 16 03:28:37.652934 kernel: NX (Execute Disable) protection: active
Dec 16 03:28:37.652943 kernel: APIC: Static calls initialized
Dec 16 03:28:37.652950 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Dec 16 03:28:37.652958 kernel: extended physical RAM map:
Dec 16 03:28:37.652967 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 03:28:37.652978 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Dec 16 03:28:37.652986 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Dec 16 03:28:37.652995 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Dec 16 03:28:37.653003 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Dec 16 03:28:37.653011 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Dec 16 03:28:37.653019 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Dec 16 03:28:37.653027 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Dec 16 03:28:37.653035 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Dec 16 03:28:37.653043 kernel: efi: EFI v2.7 by EDK II
Dec 16 03:28:37.653051 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518
Dec 16 03:28:37.653062 kernel: secureboot: Secure boot disabled
Dec 16 03:28:37.653070 kernel: SMBIOS 2.7 present.
Dec 16 03:28:37.653078 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 16 03:28:37.653086 kernel: DMI: Memory slots populated: 1/1
Dec 16 03:28:37.653094 kernel: Hypervisor detected: KVM
Dec 16 03:28:37.653102 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Dec 16 03:28:37.653110 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 03:28:37.653118 kernel: kvm-clock: using sched offset of 6770254798 cycles
Dec 16 03:28:37.653127 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 03:28:37.653136 kernel: tsc: Detected 2500.006 MHz processor
Dec 16 03:28:37.653147 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 03:28:37.653156 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 03:28:37.653164 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Dec 16 03:28:37.653173 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 03:28:37.653182 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 03:28:37.653195 kernel: Using GB pages for direct mapping
Dec 16 03:28:37.653206 kernel: ACPI: Early table checksum verification disabled
Dec 16 03:28:37.653215 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Dec 16 03:28:37.653224 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 16 03:28:37.653261 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 16 03:28:37.653270 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 16 03:28:37.653279 kernel: ACPI: FACS 0x00000000789D0000 000040
Dec 16 03:28:37.653291 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 16 03:28:37.653300 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 16 03:28:37.653309 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 16 03:28:37.653318 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 16 03:28:37.653327 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 16 03:28:37.653336 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 16 03:28:37.653345 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 16 03:28:37.653356 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Dec 16 03:28:37.653365 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Dec 16 03:28:37.653374 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Dec 16 03:28:37.653383 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Dec 16 03:28:37.653392 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Dec 16 03:28:37.653401 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Dec 16 03:28:37.653410 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Dec 16 03:28:37.653421 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Dec 16 03:28:37.653430 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Dec 16 03:28:37.653439 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Dec 16 03:28:37.653448 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Dec 16 03:28:37.653457 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Dec 16 03:28:37.653466 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 16 03:28:37.653474 kernel: NUMA: Initialized distance table, cnt=1
Dec 16 03:28:37.653486 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Dec 16 03:28:37.653495 kernel: Zone ranges:
Dec 16 03:28:37.653504 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 03:28:37.653512 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Dec 16 03:28:37.653521 kernel: Normal empty
Dec 16 03:28:37.653530 kernel: Device empty
Dec 16 03:28:37.653539 kernel: Movable zone start for each node
Dec 16 03:28:37.653548 kernel: Early memory node ranges
Dec 16 03:28:37.653559 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 16 03:28:37.653568 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Dec 16 03:28:37.653577 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Dec 16 03:28:37.653586 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Dec 16 03:28:37.653595 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 03:28:37.653604 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 16 03:28:37.653613 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Dec 16 03:28:37.653625 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Dec 16 03:28:37.653634 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 16 03:28:37.653643 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 03:28:37.653652 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 16 03:28:37.653661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 03:28:37.653670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 03:28:37.653679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 03:28:37.653688 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 03:28:37.653700 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 03:28:37.653708 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 03:28:37.653717 kernel: TSC deadline timer available
Dec 16 03:28:37.653726 kernel: CPU topo: Max. logical packages: 1
Dec 16 03:28:37.653735 kernel: CPU topo: Max. logical dies: 1
Dec 16 03:28:37.653744 kernel: CPU topo: Max. dies per package: 1
Dec 16 03:28:37.653752 kernel: CPU topo: Max. threads per core: 2
Dec 16 03:28:37.653764 kernel: CPU topo: Num. cores per package: 1
Dec 16 03:28:37.653773 kernel: CPU topo: Num. threads per package: 2
Dec 16 03:28:37.653782 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 03:28:37.653791 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 03:28:37.653800 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Dec 16 03:28:37.653809 kernel: Booting paravirtualized kernel on KVM
Dec 16 03:28:37.653818 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 03:28:37.653827 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 03:28:37.653840 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 03:28:37.653849 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 03:28:37.653858 kernel: pcpu-alloc: [0] 0 1
Dec 16 03:28:37.653867 kernel: kvm-guest: PV spinlocks enabled
Dec 16 03:28:37.653876 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 03:28:37.653886 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 03:28:37.653898 kernel: random: crng init done
Dec 16 03:28:37.653907 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 03:28:37.653916 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 03:28:37.653925 kernel: Fallback order for Node 0: 0
Dec 16 03:28:37.653934 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Dec 16 03:28:37.653944 kernel: Policy zone: DMA32
Dec 16 03:28:37.653964 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 03:28:37.653973 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 03:28:37.653982 kernel: Kernel/User page tables isolation: enabled
Dec 16 03:28:37.653994 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 03:28:37.654004 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 03:28:37.654013 kernel: Dynamic Preempt: voluntary
Dec 16 03:28:37.654022 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 03:28:37.654033 kernel: rcu: RCU event tracing is enabled.
Dec 16 03:28:37.654043 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 03:28:37.654052 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 03:28:37.654064 kernel: Rude variant of Tasks RCU enabled.
Dec 16 03:28:37.654074 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 03:28:37.654083 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 03:28:37.654092 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 03:28:37.654102 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 03:28:37.654114 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 03:28:37.654123 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 03:28:37.654133 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 03:28:37.654142 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 03:28:37.654152 kernel: Console: colour dummy device 80x25
Dec 16 03:28:37.654161 kernel: printk: legacy console [tty0] enabled
Dec 16 03:28:37.654170 kernel: printk: legacy console [ttyS0] enabled
Dec 16 03:28:37.654183 kernel: ACPI: Core revision 20240827
Dec 16 03:28:37.654192 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 16 03:28:37.654202 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 03:28:37.654211 kernel: x2apic enabled
Dec 16 03:28:37.654221 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 03:28:37.654855 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns
Dec 16 03:28:37.654873 kernel: Calibrating delay loop (skipped) preset value.. 5000.01 BogoMIPS (lpj=2500006)
Dec 16 03:28:37.654888 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 16 03:28:37.654898 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Dec 16 03:28:37.654907 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 03:28:37.654917 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 03:28:37.654926 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 03:28:37.655024 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 16 03:28:37.655034 kernel: RETBleed: Vulnerable
Dec 16 03:28:37.655044 kernel: Speculative Store Bypass: Vulnerable
Dec 16 03:28:37.655053 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 03:28:37.655062 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 03:28:37.655093 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 16 03:28:37.655102 kernel: active return thunk: its_return_thunk
Dec 16 03:28:37.655112 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 03:28:37.655121 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 03:28:37.655130 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 03:28:37.655140 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 03:28:37.655149 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 16 03:28:37.655159 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 16 03:28:37.655192 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 16 03:28:37.655204 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 16 03:28:37.655214 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 16 03:28:37.655223 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 16 03:28:37.655244 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 03:28:37.655254 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 16 03:28:37.655263 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 16 03:28:37.655272 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 16 03:28:37.655282 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 16 03:28:37.655291 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 16 03:28:37.655300 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 16 03:28:37.655310 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 16 03:28:37.655322 kernel: Freeing SMP alternatives memory: 32K
Dec 16 03:28:37.655332 kernel: pid_max: default: 32768 minimum: 301
Dec 16 03:28:37.655342 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 03:28:37.655351 kernel: landlock: Up and running.
Dec 16 03:28:37.655360 kernel: SELinux: Initializing.
Dec 16 03:28:37.655369 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 03:28:37.655379 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 03:28:37.655389 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 16 03:28:37.655398 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 16 03:28:37.655408 kernel: signal: max sigframe size: 3632
Dec 16 03:28:37.655421 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 03:28:37.655432 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 03:28:37.655442 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 03:28:37.655452 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 03:28:37.655461 kernel: smp: Bringing up secondary CPUs ...
Dec 16 03:28:37.655471 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 03:28:37.655481 kernel: .... node #0, CPUs: #1
Dec 16 03:28:37.655492 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 16 03:28:37.655505 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 16 03:28:37.655514 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 03:28:37.655524 kernel: smpboot: Total of 2 processors activated (10000.02 BogoMIPS)
Dec 16 03:28:37.655534 kernel: Memory: 1924436K/2037804K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15556K init, 2484K bss, 108804K reserved, 0K cma-reserved)
Dec 16 03:28:37.655544 kernel: devtmpfs: initialized
Dec 16 03:28:37.655554 kernel: x86/mm: Memory block size: 128MB
Dec 16 03:28:37.655566 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Dec 16 03:28:37.655576 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 03:28:37.655586 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 03:28:37.655596 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 03:28:37.655605 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 03:28:37.655615 kernel: audit: initializing netlink subsys (disabled)
Dec 16 03:28:37.655625 kernel: audit: type=2000 audit(1765855713.928:1): state=initialized audit_enabled=0 res=1
Dec 16 03:28:37.655637 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 03:28:37.655646 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 03:28:37.655657 kernel: cpuidle: using governor menu
Dec 16 03:28:37.655667 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 03:28:37.655677 kernel: dca service started, version 1.12.1
Dec 16 03:28:37.655687 kernel: PCI: Using configuration type 1 for base access
Dec 16 03:28:37.655696 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 03:28:37.655709 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 03:28:37.655718 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 03:28:37.655728 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 03:28:37.655738 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 03:28:37.655747 kernel: ACPI: Added _OSI(Module Device)
Dec 16 03:28:37.655757 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 03:28:37.655767 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 03:28:37.655779 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 16 03:28:37.655789 kernel: ACPI: Interpreter enabled
Dec 16 03:28:37.655798 kernel: ACPI: PM: (supports S0 S5)
Dec 16 03:28:37.655808 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 03:28:37.655818 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 03:28:37.655827 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 03:28:37.655837 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 16 03:28:37.655847 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 03:28:37.656073 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 03:28:37.656213 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 16 03:28:37.656368 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 16 03:28:37.656381 kernel: acpiphp: Slot [3] registered
Dec 16 03:28:37.656391 kernel: acpiphp: Slot [4] registered
Dec 16 03:28:37.656405 kernel: acpiphp: Slot [5] registered
Dec 16 03:28:37.656414 kernel: acpiphp: Slot [6] registered
Dec 16 03:28:37.656424 kernel: acpiphp: Slot [7] registered
Dec 16 03:28:37.656434 kernel: acpiphp: Slot [8] registered
Dec 16 03:28:37.656443 kernel: acpiphp: Slot [9] registered
Dec 16 03:28:37.656453 kernel: acpiphp: Slot [10] registered
Dec 16 03:28:37.656463 kernel: acpiphp: Slot [11] registered
Dec 16 03:28:37.656473 kernel: acpiphp: Slot [12] registered
Dec 16 03:28:37.656485 kernel: acpiphp: Slot [13] registered
Dec 16 03:28:37.656495 kernel: acpiphp: Slot [14] registered
Dec 16 03:28:37.656505 kernel: acpiphp: Slot [15] registered
Dec 16 03:28:37.656514 kernel: acpiphp: Slot [16] registered
Dec 16 03:28:37.656524 kernel: acpiphp: Slot [17] registered
Dec 16 03:28:37.656533 kernel: acpiphp: Slot [18] registered
Dec 16 03:28:37.656543 kernel: acpiphp: Slot [19] registered
Dec 16 03:28:37.656556 kernel: acpiphp: Slot [20] registered
Dec 16 03:28:37.656565 kernel: acpiphp: Slot [21] registered
Dec 16 03:28:37.656575 kernel: acpiphp: Slot [22] registered
Dec 16 03:28:37.656585 kernel: acpiphp: Slot [23] registered
Dec 16 03:28:37.656595 kernel: acpiphp: Slot [24] registered
Dec 16 03:28:37.656605 kernel: acpiphp: Slot [25] registered
Dec 16 03:28:37.656615 kernel: acpiphp: Slot [26] registered
Dec 16 03:28:37.656627 kernel: acpiphp: Slot [27] registered
Dec 16 03:28:37.656636 kernel: acpiphp: Slot [28] registered
Dec 16 03:28:37.656646 kernel: acpiphp: Slot [29] registered
Dec 16 03:28:37.656656 kernel: acpiphp: Slot [30] registered
Dec 16 03:28:37.656675 kernel: acpiphp: Slot [31] registered
Dec 16 03:28:37.656697 kernel: PCI host bridge to bus 0000:00
Dec 16 03:28:37.656978 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 03:28:37.657119 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 03:28:37.657252 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 03:28:37.657370 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 16 03:28:37.657485 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Dec 16 03:28:37.657601 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 03:28:37.657770 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 16 03:28:37.657913 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 16 03:28:37.658050 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Dec 16 03:28:37.658185 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 16 03:28:37.658328 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 16 03:28:37.658455 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 16 03:28:37.658587 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 16 03:28:37.658716 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 16 03:28:37.658846 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 16 03:28:37.658974 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 16 03:28:37.659108 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 16 03:28:37.659260 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Dec 16 03:28:37.659397 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Dec 16 03:28:37.659525 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 03:28:37.659660 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Dec 16 03:28:37.659794 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Dec 16 03:28:37.659994 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Dec 16 03:28:37.660270 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Dec 16 03:28:37.660284 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 03:28:37.660295 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 03:28:37.660305 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 03:28:37.660314 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 03:28:37.660324 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 16 03:28:37.660334 kernel: iommu: Default domain type: Translated
Dec 16 03:28:37.660349 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 03:28:37.660359 kernel: efivars: Registered efivars operations
Dec 16 03:28:37.660368 kernel: PCI: Using ACPI for IRQ routing
Dec 16 03:28:37.660378 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 03:28:37.660388 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Dec 16 03:28:37.660398 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Dec 16 03:28:37.660407 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Dec 16 03:28:37.660573 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 16 03:28:37.660726 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 16 03:28:37.660881 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 03:28:37.660894 kernel: vgaarb: loaded
Dec 16 03:28:37.660904 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 16 03:28:37.660914 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 16 03:28:37.660924 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 03:28:37.660940 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 03:28:37.660950 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 03:28:37.660960 kernel: pnp: PnP ACPI init
Dec 16 03:28:37.660969 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 03:28:37.660979 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 03:28:37.660990 kernel: NET: Registered PF_INET protocol family
Dec 16 03:28:37.661000 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 03:28:37.661012 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 16 03:28:37.661022 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 03:28:37.661032 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 03:28:37.661041 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 16 03:28:37.661051 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 16 03:28:37.661061 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 03:28:37.661071 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 03:28:37.661083 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 03:28:37.661093 kernel: NET: Registered PF_XDP protocol family
Dec 16 03:28:37.661219 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 03:28:37.668107 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 03:28:37.668278 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 03:28:37.668401 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 16 03:28:37.668521 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Dec 16 03:28:37.668683 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 16 03:28:37.668702 kernel: PCI: CLS 0 bytes, default 64
Dec 16 03:28:37.668717 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 16 03:28:37.668732 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns
Dec 16 03:28:37.668744 kernel: clocksource: Switched to clocksource tsc
Dec 16 03:28:37.668754 kernel: Initialise system trusted keyrings
Dec 16 03:28:37.668768 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 16 03:28:37.668778 kernel: Key type asymmetric registered
Dec 16 03:28:37.668788 kernel: Asymmetric key parser 'x509' registered
Dec 16 03:28:37.668798 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 03:28:37.668809 kernel: io scheduler mq-deadline registered
Dec 16 03:28:37.668818 kernel: io scheduler kyber registered
Dec 16 03:28:37.668829 kernel: io scheduler bfq registered
Dec 16 03:28:37.668841 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 03:28:37.668851 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 03:28:37.668861 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 03:28:37.668871 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 03:28:37.668881 kernel: i8042: Warning: Keylock active
Dec 16 03:28:37.668890 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 03:28:37.668900 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 03:28:37.669058 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 16 03:28:37.669204 kernel: rtc_cmos 00:00: registered as rtc0
Dec 16 03:28:37.669339 kernel: rtc_cmos 00:00: setting system clock to 2025-12-16T03:28:34 UTC (1765855714)
Dec 16 03:28:37.669461 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 16 03:28:37.669493 kernel: intel_pstate: CPU model not supported
Dec 16 03:28:37.669505 kernel: efifb: probing for efifb
Dec 16 03:28:37.669516 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Dec 16 03:28:37.669528 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Dec 16 03:28:37.669538 kernel: efifb: scrolling: redraw
Dec 16 03:28:37.669549 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 03:28:37.669559 kernel: Console: switching to colour frame buffer device 100x37
Dec 16 03:28:37.669569 kernel: fb0: EFI VGA frame buffer device
Dec 16 03:28:37.669580 kernel: pstore: Using crash dump compression: deflate
Dec 16 03:28:37.669590 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 03:28:37.669603 kernel: NET: Registered PF_INET6 protocol family
Dec 16 03:28:37.669613 kernel: Segment Routing with IPv6
Dec 16 03:28:37.669623 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 03:28:37.669633 kernel: NET: Registered PF_PACKET protocol family
Dec 16 03:28:37.669643 kernel: Key type dns_resolver registered
Dec 16 03:28:37.669656 kernel: IPI shorthand broadcast: enabled
Dec 16 03:28:37.669666 kernel: sched_clock: Marking stable (1379003322, 145576237)->(1616855175, -92275616)
Dec 16 03:28:37.669679 kernel: registered taskstats version 1
Dec 16 03:28:37.669689 kernel: Loading compiled-in X.509 certificates
Dec 16 03:28:37.669699 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: aafd1eb27ea805b8231c3bede9210239fae84df8'
Dec 16 03:28:37.669709 kernel: Demotion targets for Node 0: null
Dec 16 03:28:37.669719 kernel: Key type .fscrypt registered
Dec 16 03:28:37.669730 kernel: Key type fscrypt-provisioning registered
Dec 16 03:28:37.669740 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 03:28:37.669753 kernel: ima: Allocated hash algorithm: sha1
Dec 16 03:28:37.669763 kernel: ima: No architecture policies found
Dec 16 03:28:37.669773 kernel: clk: Disabling unused clocks
Dec 16 03:28:37.669783 kernel: Freeing unused kernel image (initmem) memory: 15556K
Dec 16 03:28:37.669794 kernel: Write protecting the kernel read-only data: 47104k
Dec 16 03:28:37.669808 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K
Dec 16 03:28:37.669818 kernel: Run /init as init process
Dec 16 03:28:37.669829 kernel: with arguments:
Dec 16 03:28:37.669840 kernel: /init
Dec 16 03:28:37.669849 kernel: with environment:
Dec 16 03:28:37.669860 kernel: HOME=/
Dec 16 03:28:37.669870 kernel: TERM=linux
Dec 16 03:28:37.669986 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 16 03:28:37.670003 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 16 03:28:37.670092 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 16 03:28:37.670106 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 03:28:37.670117 kernel: GPT:25804799 != 33554431
Dec 16 03:28:37.670127 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 03:28:37.670139 kernel: GPT:25804799 != 33554431
Dec 16 03:28:37.670149 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 03:28:37.670159 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 03:28:37.670170 kernel: SCSI subsystem initialized
Dec 16 03:28:37.670180 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 03:28:37.670190 kernel: device-mapper: uevent: version 1.0.3
Dec 16 03:28:37.670203 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 03:28:37.670214 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Dec 16 03:28:37.670224 kernel: raid6: avx512x4 gen() 17749 MB/s
Dec 16 03:28:37.670251 kernel: raid6: avx512x2 gen() 17721 MB/s
Dec 16 03:28:37.670262 kernel: raid6: avx512x1 gen() 17296 MB/s
Dec 16 03:28:37.670272 kernel: raid6: avx2x4 gen() 16057 MB/s
Dec 16 03:28:37.670282 kernel: raid6: avx2x2 gen() 17362 MB/s
Dec 16 03:28:37.670292 kernel: raid6: avx2x1 gen() 13551 MB/s
Dec 16 03:28:37.670305 kernel: raid6: using algorithm avx512x4 gen() 17749 MB/s
Dec 16 03:28:37.670315 kernel: raid6: .... xor() 7239 MB/s, rmw enabled
Dec 16 03:28:37.670325 kernel: raid6: using avx512x2 recovery algorithm
Dec 16 03:28:37.670335 kernel: xor: automatically using best checksumming function avx
Dec 16 03:28:37.670346 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 03:28:37.670356 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 03:28:37.670366 kernel: BTRFS: device fsid 57a8262f-2900-48ba-a17e-aafbd70d59c7 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (152)
Dec 16 03:28:37.670379 kernel: BTRFS info (device dm-0): first mount of filesystem 57a8262f-2900-48ba-a17e-aafbd70d59c7
Dec 16 03:28:37.670389 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 03:28:37.670399 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 03:28:37.670410 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 03:28:37.670420 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 03:28:37.670431 kernel: loop: module loaded
Dec 16 03:28:37.670441 kernel: loop0: detected capacity change from 0 to 100528
Dec 16 03:28:37.670455 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 03:28:37.670468 systemd[1]: Successfully made /usr/ read-only.
Dec 16 03:28:37.670482 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 03:28:37.670493 systemd[1]: Detected virtualization amazon.
Dec 16 03:28:37.670503 systemd[1]: Detected architecture x86-64.
Dec 16 03:28:37.670513 systemd[1]: Running in initrd.
Dec 16 03:28:37.670527 systemd[1]: No hostname configured, using default hostname.
Dec 16 03:28:37.670537 systemd[1]: Hostname set to .
Dec 16 03:28:37.670548 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 03:28:37.670559 systemd[1]: Queued start job for default target initrd.target. Dec 16 03:28:37.670570 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 03:28:37.670580 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:28:37.670594 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:28:37.670605 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 03:28:37.670616 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:28:37.670627 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 03:28:37.670638 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 03:28:37.670651 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:28:37.670662 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:28:37.670673 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 03:28:37.670684 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:28:37.670694 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:28:37.670705 systemd[1]: Reached target swap.target - Swaps. Dec 16 03:28:37.670715 systemd[1]: Reached target timers.target - Timer Units. Dec 16 03:28:37.670728 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 03:28:37.670739 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 03:28:37.670750 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. 
Dec 16 03:28:37.670760 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 03:28:37.670771 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 03:28:37.670782 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:28:37.670792 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:28:37.670805 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:28:37.670816 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:28:37.670827 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 03:28:37.670837 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 03:28:37.670848 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 03:28:37.670859 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 03:28:37.670870 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 03:28:37.670883 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 03:28:37.670894 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:28:37.670904 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 03:28:37.670915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:28:37.670928 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 03:28:37.670939 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:28:37.670951 systemd[1]: Finished systemd-fsck-usr.service. 
Dec 16 03:28:37.670961 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 03:28:37.670996 systemd-journald[287]: Collecting audit messages is enabled. Dec 16 03:28:37.671022 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:28:37.671035 kernel: audit: type=1130 audit(1765855717.659:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.671045 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 03:28:37.671056 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:28:37.671071 systemd-journald[287]: Journal started Dec 16 03:28:37.671093 systemd-journald[287]: Runtime Journal (/run/log/journal/ec2a54a4dda5d9830a7861cf2f0e456e) is 4.7M, max 38M, 33.2M free. Dec 16 03:28:37.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.673267 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 03:28:37.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.678260 kernel: audit: type=1130 audit(1765855717.672:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.680057 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 16 03:28:37.690757 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:28:37.699162 kernel: audit: type=1130 audit(1765855717.691:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.699867 systemd-tmpfiles[305]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 03:28:37.706907 kernel: Bridge firewalling registered Dec 16 03:28:37.706022 systemd-modules-load[291]: Inserted module 'br_netfilter' Dec 16 03:28:37.707553 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:28:37.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.713095 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 03:28:37.717010 kernel: audit: type=1130 audit(1765855717.708:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.718389 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Dec 16 03:28:37.724935 kernel: audit: type=1130 audit(1765855717.717:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.760621 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:28:37.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.766258 kernel: audit: type=1130 audit(1765855717.760:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.766279 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:28:37.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.771454 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 03:28:37.775138 kernel: audit: type=1130 audit(1765855717.765:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.775000 audit: BPF prog-id=6 op=LOAD Dec 16 03:28:37.778273 kernel: audit: type=1334 audit(1765855717.775:9): prog-id=6 op=LOAD Dec 16 03:28:37.779031 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 03:28:37.798869 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 16 03:28:37.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.805426 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 03:28:37.806999 kernel: audit: type=1130 audit(1765855717.799:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.834788 dracut-cmdline[330]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 03:28:37.898258 systemd-resolved[318]: Positive Trust Anchors: Dec 16 03:28:37.898274 systemd-resolved[318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:28:37.898278 systemd-resolved[318]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:28:37.898315 systemd-resolved[318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:28:37.927059 systemd-resolved[318]: Defaulting to hostname 'linux'. Dec 16 03:28:37.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:37.929870 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 03:28:37.930717 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:28:38.067290 kernel: Loading iSCSI transport class v2.0-870. Dec 16 03:28:38.150261 kernel: iscsi: registered transport (tcp) Dec 16 03:28:38.210273 kernel: iscsi: registered transport (qla4xxx) Dec 16 03:28:38.210359 kernel: QLogic iSCSI HBA Driver Dec 16 03:28:38.240070 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:28:38.257848 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:28:38.266349 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:28:38.266417 kernel: audit: type=1130 audit(1765855718.258:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:28:38.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:38.261496 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:28:38.310359 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 03:28:38.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:38.313141 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 03:28:38.318933 kernel: audit: type=1130 audit(1765855718.309:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:38.321377 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 03:28:38.358626 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 03:28:38.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:38.367636 kernel: audit: type=1130 audit(1765855718.358:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:38.359000 audit: BPF prog-id=7 op=LOAD Dec 16 03:28:38.370733 kernel: audit: type=1334 audit(1765855718.359:15): prog-id=7 op=LOAD Dec 16 03:28:38.370045 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 16 03:28:38.359000 audit: BPF prog-id=8 op=LOAD Dec 16 03:28:38.375252 kernel: audit: type=1334 audit(1765855718.359:16): prog-id=8 op=LOAD Dec 16 03:28:38.410554 systemd-udevd[577]: Using default interface naming scheme 'v257'. Dec 16 03:28:38.430848 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:28:38.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:38.433919 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 03:28:38.444135 kernel: audit: type=1130 audit(1765855718.430:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:38.464650 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 03:28:38.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:38.468416 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 03:28:38.477065 kernel: audit: type=1130 audit(1765855718.464:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:38.477113 kernel: audit: type=1334 audit(1765855718.465:19): prog-id=9 op=LOAD Dec 16 03:28:38.465000 audit: BPF prog-id=9 op=LOAD Dec 16 03:28:38.477571 dracut-pre-trigger[650]: rd.md=0: removing MD RAID activation Dec 16 03:28:38.513863 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 16 03:28:38.522586 kernel: audit: type=1130 audit(1765855718.513:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:38.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:38.517420 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:28:38.549147 systemd-networkd[674]: lo: Link UP Dec 16 03:28:38.549166 systemd-networkd[674]: lo: Gained carrier Dec 16 03:28:38.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:38.550102 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 03:28:38.560441 kernel: audit: type=1130 audit(1765855718.549:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:38.551043 systemd[1]: Reached target network.target - Network. Dec 16 03:28:38.600349 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:28:38.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:38.605612 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 03:28:38.764399 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:28:38.764635 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 16 03:28:38.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:38.769646 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:28:38.773341 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:28:38.779270 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 16 03:28:38.779633 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 16 03:28:38.783293 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Dec 16 03:28:38.790253 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:78:87:02:b4:7f Dec 16 03:28:38.790360 (udev-worker)[713]: Network interface NamePolicy= disabled on kernel command line. Dec 16 03:28:38.796648 systemd-networkd[674]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:28:38.796750 systemd-networkd[674]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 03:28:38.801315 systemd-networkd[674]: eth0: Link UP Dec 16 03:28:38.801443 systemd-networkd[674]: eth0: Gained carrier Dec 16 03:28:38.801458 systemd-networkd[674]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:28:38.812369 systemd-networkd[674]: eth0: DHCPv4 address 172.31.30.117/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 03:28:38.816003 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:28:38.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:28:38.870413 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 03:28:38.878265 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 03:28:38.883270 kernel: nvme nvme0: using unchecked data buffer Dec 16 03:28:38.900276 kernel: AES CTR mode by8 optimization enabled Dec 16 03:28:38.986141 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 16 03:28:38.989436 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 03:28:39.014289 disk-uuid[829]: Primary Header is updated. Dec 16 03:28:39.014289 disk-uuid[829]: Secondary Entries is updated. Dec 16 03:28:39.014289 disk-uuid[829]: Secondary Header is updated. Dec 16 03:28:39.097212 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 16 03:28:39.143306 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 03:28:39.164252 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 16 03:28:39.309296 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 03:28:39.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:39.310960 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 03:28:39.311577 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:28:39.312891 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 03:28:39.314933 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 03:28:39.341099 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Dec 16 03:28:39.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:40.173082 disk-uuid[830]: Warning: The kernel is still using the old partition table. Dec 16 03:28:40.173082 disk-uuid[830]: The new table will be used at the next reboot or after you Dec 16 03:28:40.173082 disk-uuid[830]: run partprobe(8) or kpartx(8) Dec 16 03:28:40.173082 disk-uuid[830]: The operation has completed successfully. Dec 16 03:28:40.182421 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 03:28:40.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:40.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:40.182581 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 03:28:40.184591 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Dec 16 03:28:40.226261 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (990) Dec 16 03:28:40.230281 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:28:40.230350 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:28:40.270062 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 03:28:40.270135 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 03:28:40.278261 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:28:40.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:40.279140 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 03:28:40.282018 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 03:28:40.496491 systemd-networkd[674]: eth0: Gained IPv6LL Dec 16 03:28:41.568346 ignition[1009]: Ignition 2.24.0 Dec 16 03:28:41.568363 ignition[1009]: Stage: fetch-offline Dec 16 03:28:41.568504 ignition[1009]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:28:41.568519 ignition[1009]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 03:28:41.572049 ignition[1009]: Ignition finished successfully Dec 16 03:28:41.573551 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 03:28:41.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:41.575911 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 03:28:41.605881 ignition[1015]: Ignition 2.24.0
Dec 16 03:28:41.605899 ignition[1015]: Stage: fetch
Dec 16 03:28:41.606167 ignition[1015]: no configs at "/usr/lib/ignition/base.d"
Dec 16 03:28:41.606198 ignition[1015]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 03:28:41.606320 ignition[1015]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 03:28:41.657666 ignition[1015]: PUT result: OK
Dec 16 03:28:41.660364 ignition[1015]: parsed url from cmdline: ""
Dec 16 03:28:41.660373 ignition[1015]: no config URL provided
Dec 16 03:28:41.660381 ignition[1015]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 03:28:41.660397 ignition[1015]: no config at "/usr/lib/ignition/user.ign"
Dec 16 03:28:41.660420 ignition[1015]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 03:28:41.662112 ignition[1015]: PUT result: OK
Dec 16 03:28:41.662177 ignition[1015]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 16 03:28:41.664226 ignition[1015]: GET result: OK
Dec 16 03:28:41.664326 ignition[1015]: parsing config with SHA512: dd70696cbcad29ba7233baa3df7bee958b37ae28d7a1efa5468dbfe135e29a84e215ec6eb0a99be1fa40dc09078281d895498b21d0320c0679660630b8bb1b09
Dec 16 03:28:41.670550 unknown[1015]: fetched base config from "system"
Dec 16 03:28:41.670570 unknown[1015]: fetched base config from "system"
Dec 16 03:28:41.671099 ignition[1015]: fetch: fetch complete
Dec 16 03:28:41.670578 unknown[1015]: fetched user config from "aws"
Dec 16 03:28:41.671107 ignition[1015]: fetch: fetch passed
Dec 16 03:28:41.671172 ignition[1015]: Ignition finished successfully
Dec 16 03:28:41.674547 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 03:28:41.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:41.676477 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 03:28:41.708558 ignition[1021]: Ignition 2.24.0
Dec 16 03:28:41.708575 ignition[1021]: Stage: kargs
Dec 16 03:28:41.708854 ignition[1021]: no configs at "/usr/lib/ignition/base.d"
Dec 16 03:28:41.708867 ignition[1021]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 03:28:41.708993 ignition[1021]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 03:28:41.709926 ignition[1021]: PUT result: OK
Dec 16 03:28:41.713710 ignition[1021]: kargs: kargs passed
Dec 16 03:28:41.715000 ignition[1021]: Ignition finished successfully
Dec 16 03:28:41.717260 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 03:28:41.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:41.718881 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 03:28:41.763507 ignition[1027]: Ignition 2.24.0
Dec 16 03:28:41.763523 ignition[1027]: Stage: disks
Dec 16 03:28:41.763810 ignition[1027]: no configs at "/usr/lib/ignition/base.d"
Dec 16 03:28:41.763823 ignition[1027]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 03:28:41.763926 ignition[1027]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 03:28:41.764905 ignition[1027]: PUT result: OK
Dec 16 03:28:41.769439 ignition[1027]: disks: disks passed
Dec 16 03:28:41.770039 ignition[1027]: Ignition finished successfully
Dec 16 03:28:41.772257 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 03:28:41.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:41.773012 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 03:28:41.773472 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 03:28:41.774024 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 03:28:41.774624 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 03:28:41.775188 systemd[1]: Reached target basic.target - Basic System.
Dec 16 03:28:41.777080 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 03:28:41.889006 systemd-fsck[1036]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Dec 16 03:28:41.891580 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 03:28:41.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:41.894136 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 03:28:42.145254 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 1314c107-11a5-486b-9d52-be9f57b6bf1b r/w with ordered data mode. Quota mode: none.
Dec 16 03:28:42.146212 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 03:28:42.147147 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 03:28:42.200994 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 03:28:42.204347 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 03:28:42.206338 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 03:28:42.207073 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 03:28:42.207104 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 03:28:42.212434 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 03:28:42.214866 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 03:28:42.228282 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1055)
Dec 16 03:28:42.231695 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 03:28:42.231746 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 03:28:42.243254 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 03:28:42.243324 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 03:28:42.245315 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 03:28:44.413896 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 03:28:44.422416 kernel: kauditd_printk_skb: 13 callbacks suppressed
Dec 16 03:28:44.422454 kernel: audit: type=1130 audit(1765855724.413:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:44.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:44.417367 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 03:28:44.433491 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 03:28:44.445273 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 03:28:44.447284 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 03:28:44.484917 ignition[1151]: INFO : Ignition 2.24.0
Dec 16 03:28:44.486421 ignition[1151]: INFO : Stage: mount
Dec 16 03:28:44.486421 ignition[1151]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 03:28:44.486421 ignition[1151]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 03:28:44.486421 ignition[1151]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 03:28:44.490656 ignition[1151]: INFO : PUT result: OK
Dec 16 03:28:44.491936 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 03:28:44.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:44.498157 ignition[1151]: INFO : mount: mount passed
Dec 16 03:28:44.498157 ignition[1151]: INFO : Ignition finished successfully
Dec 16 03:28:44.499402 kernel: audit: type=1130 audit(1765855724.492:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:44.499958 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 03:28:44.506371 kernel: audit: type=1130 audit(1765855724.499:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:44.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:44.504395 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 03:28:44.523789 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 03:28:44.563259 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1163)
Dec 16 03:28:44.567489 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 03:28:44.567573 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 03:28:44.575010 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 03:28:44.575088 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 03:28:44.577389 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 03:28:44.617437 ignition[1179]: INFO : Ignition 2.24.0
Dec 16 03:28:44.617437 ignition[1179]: INFO : Stage: files
Dec 16 03:28:44.618849 ignition[1179]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 03:28:44.618849 ignition[1179]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 03:28:44.618849 ignition[1179]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 03:28:44.620746 ignition[1179]: INFO : PUT result: OK
Dec 16 03:28:44.626673 ignition[1179]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 03:28:44.629266 ignition[1179]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 03:28:44.629266 ignition[1179]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 03:28:44.669653 ignition[1179]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 03:28:44.670616 ignition[1179]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 03:28:44.670616 ignition[1179]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 03:28:44.670006 unknown[1179]: wrote ssh authorized keys file for user: core
Dec 16 03:28:44.707694 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 16 03:28:44.709027 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Dec 16 03:28:44.778283 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 03:28:44.944067 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 16 03:28:44.944067 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 03:28:44.946647 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 03:28:44.946647 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 03:28:44.946647 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 03:28:44.946647 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 03:28:44.946647 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 03:28:44.946647 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 03:28:44.946647 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 03:28:44.952176 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 03:28:44.952176 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 03:28:44.952176 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 03:28:44.955481 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 03:28:44.955481 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 03:28:44.955481 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Dec 16 03:28:45.381765 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 03:28:45.946443 ignition[1179]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 03:28:45.946443 ignition[1179]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 03:28:45.976992 ignition[1179]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 03:28:45.983536 ignition[1179]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 03:28:45.983536 ignition[1179]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 03:28:45.983536 ignition[1179]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 03:28:45.993876 kernel: audit: type=1130 audit(1765855725.985:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:45.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:45.993987 ignition[1179]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 03:28:45.993987 ignition[1179]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 03:28:45.993987 ignition[1179]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 03:28:45.993987 ignition[1179]: INFO : files: files passed
Dec 16 03:28:45.993987 ignition[1179]: INFO : Ignition finished successfully
Dec 16 03:28:45.986135 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 03:28:45.989487 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 03:28:46.001302 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 03:28:46.007724 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 03:28:46.018338 kernel: audit: type=1130 audit(1765855726.007:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.018380 kernel: audit: type=1131 audit(1765855726.009:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.007825 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 03:28:46.033703 initrd-setup-root-after-ignition[1213]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 03:28:46.033703 initrd-setup-root-after-ignition[1213]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 03:28:46.036542 initrd-setup-root-after-ignition[1217]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 03:28:46.038757 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 03:28:46.044841 kernel: audit: type=1130 audit(1765855726.038:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.039526 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 03:28:46.046415 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 03:28:46.105228 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 03:28:46.105385 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 03:28:46.117837 kernel: audit: type=1130 audit(1765855726.106:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.117883 kernel: audit: type=1131 audit(1765855726.106:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.108224 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 03:28:46.118357 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 03:28:46.119538 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 03:28:46.120944 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 03:28:46.162833 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 03:28:46.169824 kernel: audit: type=1130 audit(1765855726.162:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.166434 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 03:28:46.199268 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 03:28:46.199521 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 03:28:46.200885 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 03:28:46.201792 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 03:28:46.202748 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 03:28:46.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.203005 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 03:28:46.204160 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 03:28:46.205267 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 03:28:46.206123 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 03:28:46.206926 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 03:28:46.207730 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 03:28:46.208539 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 03:28:46.209605 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 03:28:46.210394 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 03:28:46.211290 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 03:28:46.212428 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 03:28:46.213401 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 03:28:46.214155 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 03:28:46.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.214388 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 03:28:46.215459 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 03:28:46.216352 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 03:28:46.217251 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 03:28:46.217427 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 03:28:46.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.218058 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 03:28:46.218332 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 03:28:46.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.219688 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 03:28:46.219900 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 03:28:46.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.221009 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 03:28:46.221246 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 03:28:46.223568 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 03:28:46.227589 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 03:28:46.228328 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 03:28:46.230455 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 03:28:46.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.231323 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 03:28:46.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.231576 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 03:28:46.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.236078 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 03:28:46.236292 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 03:28:46.248411 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 03:28:46.248568 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 03:28:46.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.262828 ignition[1237]: INFO : Ignition 2.24.0
Dec 16 03:28:46.262828 ignition[1237]: INFO : Stage: umount
Dec 16 03:28:46.264365 ignition[1237]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 03:28:46.264365 ignition[1237]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 03:28:46.264365 ignition[1237]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 03:28:46.267256 ignition[1237]: INFO : PUT result: OK
Dec 16 03:28:46.274274 ignition[1237]: INFO : umount: umount passed
Dec 16 03:28:46.274274 ignition[1237]: INFO : Ignition finished successfully
Dec 16 03:28:46.274177 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 03:28:46.277558 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 03:28:46.277733 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 03:28:46.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.278770 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 03:28:46.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.278845 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 03:28:46.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.279394 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 03:28:46.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.279480 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 03:28:46.280172 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 03:28:46.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.280291 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 03:28:46.281001 systemd[1]: Stopped target network.target - Network.
Dec 16 03:28:46.281697 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 03:28:46.281774 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 03:28:46.282478 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 03:28:46.283095 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 03:28:46.286302 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 03:28:46.286720 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 03:28:46.287916 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 03:28:46.288588 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 03:28:46.288734 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 03:28:46.289363 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 03:28:46.289415 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 03:28:46.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.289995 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 16 03:28:46.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.290036 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 03:28:46.291182 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 03:28:46.291311 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 03:28:46.291918 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 03:28:46.291985 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 03:28:46.292885 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 03:28:46.293667 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 03:28:46.299504 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 03:28:46.299683 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 03:28:46.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.302000 audit: BPF prog-id=6 op=UNLOAD
Dec 16 03:28:46.304198 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 03:28:46.304382 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 03:28:46.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.307000 audit: BPF prog-id=9 op=UNLOAD
Dec 16 03:28:46.307754 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 03:28:46.308785 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 03:28:46.308841 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 03:28:46.311405 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 03:28:46.311975 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 03:28:46.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.312065 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 03:28:46.313435 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 03:28:46.313516 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 03:28:46.314108 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 03:28:46.314179 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 03:28:46.320380 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 03:28:46.327277 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 03:28:46.328197 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 03:28:46.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.332386 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 03:28:46.333135 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 03:28:46.334630 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 03:28:46.335113 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 03:28:46.335611 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 03:28:46.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.335690 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 03:28:46.336970 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 03:28:46.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.337044 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 03:28:46.338224 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 03:28:46.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:46.338308 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 03:28:46.340886 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 03:28:46.341381 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 03:28:46.341458 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 03:28:46.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:46.344906 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 03:28:46.345013 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:28:46.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:46.347404 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 03:28:46.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:46.347492 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:28:46.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:46.349104 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 03:28:46.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:46.349188 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:28:46.349917 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:28:46.349989 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 16 03:28:46.363324 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 03:28:46.366653 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 03:28:46.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:46.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:46.373972 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 03:28:46.374146 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 03:28:46.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:46.427749 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 03:28:46.427893 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 03:28:46.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:46.429998 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 03:28:46.430627 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 03:28:46.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:46.430736 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 03:28:46.433514 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Dec 16 03:28:46.466388 systemd[1]: Switching root. Dec 16 03:28:46.501670 systemd-journald[287]: Journal stopped Dec 16 03:28:49.402481 systemd-journald[287]: Received SIGTERM from PID 1 (systemd). Dec 16 03:28:49.402577 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 03:28:49.402613 kernel: SELinux: policy capability open_perms=1 Dec 16 03:28:49.402650 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 03:28:49.402672 kernel: SELinux: policy capability always_check_network=0 Dec 16 03:28:49.402700 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 03:28:49.402723 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 03:28:49.402744 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 03:28:49.402770 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 03:28:49.402791 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 03:28:49.402818 systemd[1]: Successfully loaded SELinux policy in 171.693ms. Dec 16 03:28:49.402848 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.311ms. Dec 16 03:28:49.402873 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 03:28:49.402897 systemd[1]: Detected virtualization amazon. Dec 16 03:28:49.402919 systemd[1]: Detected architecture x86-64. Dec 16 03:28:49.402941 systemd[1]: Detected first boot. Dec 16 03:28:49.402963 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 03:28:49.402990 zram_generator::config[1280]: No configuration found. 
Dec 16 03:28:49.403013 kernel: Guest personality initialized and is inactive Dec 16 03:28:49.403034 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 03:28:49.403055 kernel: Initialized host personality Dec 16 03:28:49.403078 kernel: NET: Registered PF_VSOCK protocol family Dec 16 03:28:49.403106 systemd[1]: Populated /etc with preset unit settings. Dec 16 03:28:49.403128 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 03:28:49.403154 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 03:28:49.403176 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 03:28:49.403206 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 03:28:49.411282 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 03:28:49.411361 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 03:28:49.411389 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 03:28:49.411417 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 03:28:49.411455 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 03:28:49.411481 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 03:28:49.411509 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 03:28:49.411535 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:28:49.411561 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:28:49.411582 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 03:28:49.411604 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Dec 16 03:28:49.411634 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 03:28:49.411659 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:28:49.411687 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 03:28:49.411712 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:28:49.411738 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:28:49.411765 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 03:28:49.411796 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 03:28:49.411822 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 03:28:49.411848 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 03:28:49.411874 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:28:49.411900 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 03:28:49.411928 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 03:28:49.411955 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:28:49.411984 systemd[1]: Reached target swap.target - Swaps. Dec 16 03:28:49.412011 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 03:28:49.412037 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 03:28:49.412063 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 03:28:49.412090 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:28:49.412115 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
Dec 16 03:28:49.412142 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:28:49.412171 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 03:28:49.412197 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 03:28:49.412223 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:28:49.412282 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:28:49.412310 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 03:28:49.412333 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 03:28:49.412361 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 03:28:49.412391 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 03:28:49.412415 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:28:49.412442 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 03:28:49.412469 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 03:28:49.412495 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 03:28:49.412522 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 03:28:49.412547 systemd[1]: Reached target machines.target - Containers. Dec 16 03:28:49.412576 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 03:28:49.412622 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:28:49.412650 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Dec 16 03:28:49.412676 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 03:28:49.412700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:28:49.412727 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:28:49.412754 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:28:49.412785 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 03:28:49.412811 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:28:49.412835 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 03:28:49.412862 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 03:28:49.412890 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 03:28:49.412929 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 03:28:49.412960 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 03:28:49.412989 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:28:49.413017 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:28:49.413042 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 03:28:49.413069 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:28:49.413096 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 03:28:49.413123 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Dec 16 03:28:49.413151 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:28:49.413179 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:28:49.413208 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 03:28:49.427140 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 03:28:49.427189 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 03:28:49.427211 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 03:28:49.427253 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 03:28:49.427276 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 03:28:49.427303 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:28:49.427327 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:28:49.427349 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:28:49.427370 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:28:49.427395 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:28:49.427416 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:28:49.427438 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:28:49.427460 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:28:49.427484 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 03:28:49.427507 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:28:49.427529 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. 
Dec 16 03:28:49.427554 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 03:28:49.427576 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 03:28:49.427600 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 03:28:49.427624 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 03:28:49.427648 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Dec 16 03:28:49.427711 systemd-journald[1358]: Collecting audit messages is enabled.
Dec 16 03:28:49.427759 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 03:28:49.427783 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 03:28:49.427807 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 03:28:49.427833 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 03:28:49.427857 systemd-journald[1358]: Journal started
Dec 16 03:28:49.427898 systemd-journald[1358]: Runtime Journal (/run/log/journal/ec2a54a4dda5d9830a7861cf2f0e456e) is 4.7M, max 38M, 33.2M free.
Dec 16 03:28:49.074000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 16 03:28:49.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.267000 audit: BPF prog-id=14 op=UNLOAD
Dec 16 03:28:49.267000 audit: BPF prog-id=13 op=UNLOAD
Dec 16 03:28:49.432295 kernel: fuse: init (API version 7.41)
Dec 16 03:28:49.268000 audit: BPF prog-id=15 op=LOAD
Dec 16 03:28:49.268000 audit: BPF prog-id=16 op=LOAD
Dec 16 03:28:49.268000 audit: BPF prog-id=17 op=LOAD
Dec 16 03:28:49.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.397000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 16 03:28:49.397000 audit[1358]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffec3390260 a2=4000 a3=0 items=0 ppid=1 pid=1358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:28:49.397000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 16 03:28:48.983783 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 03:28:49.003580 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 16 03:28:49.004132 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 03:28:49.437361 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 03:28:49.444289 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 03:28:49.452662 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 03:28:49.460294 kernel: kauditd_printk_skb: 74 callbacks suppressed Dec 16 03:28:49.460384 kernel: audit: type=1130 audit(1765855729.451:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.455702 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 03:28:49.455950 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 03:28:49.461512 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 03:28:49.461750 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 03:28:49.463130 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:28:49.471264 kernel: audit: type=1130 audit(1765855729.459:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.473766 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Dec 16 03:28:49.481316 kernel: audit: type=1131 audit(1765855729.459:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.494899 kernel: audit: type=1130 audit(1765855729.461:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.506247 kernel: audit: type=1131 audit(1765855729.461:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.512342 kernel: audit: type=1130 audit(1765855729.462:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.508208 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Dec 16 03:28:49.526267 kernel: audit: type=1130 audit(1765855729.474:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.526365 kernel: audit: type=1130 audit(1765855729.511:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.523583 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 03:28:49.537282 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 03:28:49.545443 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 03:28:49.549687 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 03:28:49.575463 kernel: loop1: detected capacity change from 0 to 111560 Dec 16 03:28:49.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.581960 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Dec 16 03:28:49.589264 kernel: audit: type=1130 audit(1765855729.582:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.603405 systemd-journald[1358]: Time spent on flushing to /var/log/journal/ec2a54a4dda5d9830a7861cf2f0e456e is 68.719ms for 1152 entries.
Dec 16 03:28:49.603405 systemd-journald[1358]: System Journal (/var/log/journal/ec2a54a4dda5d9830a7861cf2f0e456e) is 8M, max 588.1M, 580.1M free.
Dec 16 03:28:49.704157 systemd-journald[1358]: Received client request to flush runtime journal.
Dec 16 03:28:49.704269 kernel: ACPI: bus type drm_connector registered
Dec 16 03:28:49.704316 kernel: audit: type=1130 audit(1765855729.619:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.602512 systemd-tmpfiles[1375]: ACLs are not supported, ignoring.
Dec 16 03:28:49.602541 systemd-tmpfiles[1375]: ACLs are not supported, ignoring.
Dec 16 03:28:49.617428 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 03:28:49.618535 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 03:28:49.629197 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 03:28:49.670851 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 03:28:49.693497 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 03:28:49.708143 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 03:28:49.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.726974 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 03:28:49.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:28:49.730454 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 03:28:49.785922 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 03:28:49.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.787000 audit: BPF prog-id=18 op=LOAD Dec 16 03:28:49.787000 audit: BPF prog-id=19 op=LOAD Dec 16 03:28:49.787000 audit: BPF prog-id=20 op=LOAD Dec 16 03:28:49.792475 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 03:28:49.797000 audit: BPF prog-id=21 op=LOAD Dec 16 03:28:49.801467 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 03:28:49.807517 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:28:49.815000 audit: BPF prog-id=22 op=LOAD Dec 16 03:28:49.815000 audit: BPF prog-id=23 op=LOAD Dec 16 03:28:49.815000 audit: BPF prog-id=24 op=LOAD Dec 16 03:28:49.818527 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 03:28:49.820000 audit: BPF prog-id=25 op=LOAD Dec 16 03:28:49.820000 audit: BPF prog-id=26 op=LOAD Dec 16 03:28:49.820000 audit: BPF prog-id=27 op=LOAD Dec 16 03:28:49.825351 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 03:28:49.869389 systemd-tmpfiles[1431]: ACLs are not supported, ignoring. Dec 16 03:28:49.869422 systemd-tmpfiles[1431]: ACLs are not supported, ignoring. Dec 16 03:28:49.876413 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:28:49.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.926469 systemd-nsresourced[1432]: Not setting up BPF subsystem, as functionality has been disabled at compile time. 
Dec 16 03:28:49.941807 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 03:28:49.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:49.948264 kernel: loop2: detected capacity change from 0 to 73176 Dec 16 03:28:49.955612 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 03:28:49.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:50.006096 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 03:28:50.011391 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 03:28:50.018377 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 03:28:50.034975 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 03:28:50.045642 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 03:28:50.120190 systemd-oomd[1429]: No swap; memory pressure usage will be degraded Dec 16 03:28:50.121647 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 03:28:50.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:50.223709 systemd-resolved[1430]: Positive Trust Anchors: Dec 16 03:28:50.223727 systemd-resolved[1430]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:28:50.223734 systemd-resolved[1430]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:28:50.223804 systemd-resolved[1430]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:28:50.233114 systemd-resolved[1430]: Defaulting to hostname 'linux'. Dec 16 03:28:50.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:50.235558 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 03:28:50.236648 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:28:50.368269 kernel: loop3: detected capacity change from 0 to 50784 Dec 16 03:28:50.478378 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 03:28:50.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:50.478000 audit: BPF prog-id=8 op=UNLOAD Dec 16 03:28:50.478000 audit: BPF prog-id=7 op=UNLOAD Dec 16 03:28:50.478000 audit: BPF prog-id=28 op=LOAD Dec 16 03:28:50.478000 audit: BPF prog-id=29 op=LOAD Dec 16 03:28:50.480953 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 16 03:28:50.517397 systemd-udevd[1457]: Using default interface naming scheme 'v257'. Dec 16 03:28:50.687279 kernel: loop4: detected capacity change from 0 to 224512 Dec 16 03:28:50.733543 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:28:50.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:50.734000 audit: BPF prog-id=30 op=LOAD Dec 16 03:28:50.740454 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 03:28:50.818096 (udev-worker)[1472]: Network interface NamePolicy= disabled on kernel command line. Dec 16 03:28:50.826059 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 03:28:50.858531 systemd-networkd[1463]: lo: Link UP Dec 16 03:28:50.858545 systemd-networkd[1463]: lo: Gained carrier Dec 16 03:28:50.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:50.859765 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 03:28:50.861481 systemd[1]: Reached target network.target - Network. Dec 16 03:28:50.865657 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 03:28:50.868897 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 03:28:50.903072 systemd-networkd[1463]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:28:50.903089 systemd-networkd[1463]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 03:28:50.908740 systemd-networkd[1463]: eth0: Link UP Dec 16 03:28:50.910459 systemd-networkd[1463]: eth0: Gained carrier Dec 16 03:28:50.911914 systemd-networkd[1463]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:28:50.921426 systemd-networkd[1463]: eth0: DHCPv4 address 172.31.30.117/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 03:28:50.930769 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 03:28:50.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:50.954262 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 03:28:50.970266 kernel: loop5: detected capacity change from 0 to 111560 Dec 16 03:28:50.977263 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 16 03:28:50.987273 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 03:28:50.991283 kernel: ACPI: button: Power Button [PWRF] Dec 16 03:28:50.991387 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Dec 16 03:28:50.993679 kernel: ACPI: button: Sleep Button [SLPF] Dec 16 03:28:50.997255 kernel: loop6: detected capacity change from 0 to 73176 Dec 16 03:28:51.027439 kernel: loop7: detected capacity change from 0 to 50784 Dec 16 03:28:51.050254 kernel: loop1: detected capacity change from 0 to 224512 Dec 16 03:28:51.076395 (sd-merge)[1503]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Dec 16 03:28:51.088539 (sd-merge)[1503]: Merged extensions into '/usr'. 
Dec 16 03:28:51.128164 systemd[1]: Reload requested from client PID 1372 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 03:28:51.128184 systemd[1]: Reloading... Dec 16 03:28:51.306255 zram_generator::config[1561]: No configuration found. Dec 16 03:28:51.658421 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 03:28:51.659098 systemd[1]: Reloading finished in 530 ms. Dec 16 03:28:51.682960 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 03:28:51.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:51.741257 systemd[1]: Starting ensure-sysext.service... Dec 16 03:28:51.743534 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 03:28:51.747611 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 03:28:51.753459 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 16 03:28:51.755000 audit: BPF prog-id=31 op=LOAD Dec 16 03:28:51.755000 audit: BPF prog-id=25 op=UNLOAD Dec 16 03:28:51.755000 audit: BPF prog-id=32 op=LOAD Dec 16 03:28:51.755000 audit: BPF prog-id=33 op=LOAD Dec 16 03:28:51.755000 audit: BPF prog-id=26 op=UNLOAD Dec 16 03:28:51.755000 audit: BPF prog-id=27 op=UNLOAD Dec 16 03:28:51.756000 audit: BPF prog-id=34 op=LOAD Dec 16 03:28:51.756000 audit: BPF prog-id=18 op=UNLOAD Dec 16 03:28:51.756000 audit: BPF prog-id=35 op=LOAD Dec 16 03:28:51.757000 audit: BPF prog-id=36 op=LOAD Dec 16 03:28:51.757000 audit: BPF prog-id=19 op=UNLOAD Dec 16 03:28:51.757000 audit: BPF prog-id=20 op=UNLOAD Dec 16 03:28:51.764000 audit: BPF prog-id=37 op=LOAD Dec 16 03:28:51.764000 audit: BPF prog-id=22 op=UNLOAD Dec 16 03:28:51.764000 audit: BPF prog-id=38 op=LOAD Dec 16 03:28:51.764000 audit: BPF prog-id=39 op=LOAD Dec 16 03:28:51.765000 audit: BPF prog-id=23 op=UNLOAD Dec 16 03:28:51.765000 audit: BPF prog-id=24 op=UNLOAD Dec 16 03:28:51.765000 audit: BPF prog-id=40 op=LOAD Dec 16 03:28:51.765000 audit: BPF prog-id=21 op=UNLOAD Dec 16 03:28:51.766000 audit: BPF prog-id=41 op=LOAD Dec 16 03:28:51.766000 audit: BPF prog-id=15 op=UNLOAD Dec 16 03:28:51.767000 audit: BPF prog-id=42 op=LOAD Dec 16 03:28:51.774000 audit: BPF prog-id=43 op=LOAD Dec 16 03:28:51.774000 audit: BPF prog-id=16 op=UNLOAD Dec 16 03:28:51.774000 audit: BPF prog-id=17 op=UNLOAD Dec 16 03:28:51.776000 audit: BPF prog-id=44 op=LOAD Dec 16 03:28:51.776000 audit: BPF prog-id=30 op=UNLOAD Dec 16 03:28:51.776000 audit: BPF prog-id=45 op=LOAD Dec 16 03:28:51.776000 audit: BPF prog-id=46 op=LOAD Dec 16 03:28:51.776000 audit: BPF prog-id=28 op=UNLOAD Dec 16 03:28:51.776000 audit: BPF prog-id=29 op=UNLOAD Dec 16 03:28:51.791633 systemd[1]: Reload requested from client PID 1672 ('systemctl') (unit ensure-sysext.service)... Dec 16 03:28:51.791651 systemd[1]: Reloading... 
Dec 16 03:28:51.818205 systemd-tmpfiles[1674]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 03:28:51.818274 systemd-tmpfiles[1674]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 03:28:51.818725 systemd-tmpfiles[1674]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 03:28:51.820913 systemd-tmpfiles[1674]: ACLs are not supported, ignoring. Dec 16 03:28:51.821010 systemd-tmpfiles[1674]: ACLs are not supported, ignoring. Dec 16 03:28:51.833648 systemd-tmpfiles[1674]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 03:28:51.833664 systemd-tmpfiles[1674]: Skipping /boot Dec 16 03:28:51.853830 systemd-tmpfiles[1674]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 03:28:51.853847 systemd-tmpfiles[1674]: Skipping /boot Dec 16 03:28:51.900279 zram_generator::config[1714]: No configuration found. Dec 16 03:28:52.191869 systemd[1]: Reloading finished in 399 ms. Dec 16 03:28:52.208409 systemd-networkd[1463]: eth0: Gained IPv6LL Dec 16 03:28:52.213669 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 03:28:52.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:28:52.214000 audit: BPF prog-id=47 op=LOAD Dec 16 03:28:52.214000 audit: BPF prog-id=48 op=LOAD Dec 16 03:28:52.214000 audit: BPF prog-id=45 op=UNLOAD Dec 16 03:28:52.214000 audit: BPF prog-id=46 op=UNLOAD Dec 16 03:28:52.215000 audit: BPF prog-id=49 op=LOAD Dec 16 03:28:52.215000 audit: BPF prog-id=31 op=UNLOAD Dec 16 03:28:52.215000 audit: BPF prog-id=50 op=LOAD Dec 16 03:28:52.215000 audit: BPF prog-id=51 op=LOAD Dec 16 03:28:52.215000 audit: BPF prog-id=32 op=UNLOAD Dec 16 03:28:52.215000 audit: BPF prog-id=33 op=UNLOAD Dec 16 03:28:52.216000 audit: BPF prog-id=52 op=LOAD Dec 16 03:28:52.216000 audit: BPF prog-id=34 op=UNLOAD Dec 16 03:28:52.216000 audit: BPF prog-id=53 op=LOAD Dec 16 03:28:52.216000 audit: BPF prog-id=54 op=LOAD Dec 16 03:28:52.216000 audit: BPF prog-id=35 op=UNLOAD Dec 16 03:28:52.216000 audit: BPF prog-id=36 op=UNLOAD Dec 16 03:28:52.217000 audit: BPF prog-id=55 op=LOAD Dec 16 03:28:52.217000 audit: BPF prog-id=44 op=UNLOAD Dec 16 03:28:52.218000 audit: BPF prog-id=56 op=LOAD Dec 16 03:28:52.218000 audit: BPF prog-id=37 op=UNLOAD Dec 16 03:28:52.218000 audit: BPF prog-id=57 op=LOAD Dec 16 03:28:52.218000 audit: BPF prog-id=58 op=LOAD Dec 16 03:28:52.218000 audit: BPF prog-id=38 op=UNLOAD Dec 16 03:28:52.218000 audit: BPF prog-id=39 op=UNLOAD Dec 16 03:28:52.219000 audit: BPF prog-id=59 op=LOAD Dec 16 03:28:52.219000 audit: BPF prog-id=41 op=UNLOAD Dec 16 03:28:52.219000 audit: BPF prog-id=60 op=LOAD Dec 16 03:28:52.219000 audit: BPF prog-id=61 op=LOAD Dec 16 03:28:52.219000 audit: BPF prog-id=42 op=UNLOAD Dec 16 03:28:52.219000 audit: BPF prog-id=43 op=UNLOAD Dec 16 03:28:52.220000 audit: BPF prog-id=62 op=LOAD Dec 16 03:28:52.220000 audit: BPF prog-id=40 op=UNLOAD Dec 16 03:28:52.229783 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Dec 16 03:28:52.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:52.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:52.230693 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:28:52.232826 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:28:52.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:52.242920 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 03:28:52.245119 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 03:28:52.248364 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 03:28:52.252081 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 03:28:52.260352 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 03:28:52.265352 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 03:28:52.271265 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:28:52.271575 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:28:52.274397 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Dec 16 03:28:52.286980 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:28:52.293073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:28:52.294504 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:28:52.294844 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:28:52.295010 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:28:52.295168 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:28:52.302174 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:28:52.302500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:28:52.302800 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:28:52.303053 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:28:52.303184 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Dec 16 03:28:52.304401 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:28:52.308000 audit[1774]: SYSTEM_BOOT pid=1774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 03:28:52.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:52.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:52.323862 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:28:52.324143 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:28:52.325702 systemd[1]: Finished ensure-sysext.service. Dec 16 03:28:52.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:52.328818 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:28:52.329085 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:28:52.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:28:52.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:52.334932 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 03:28:52.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:52.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:52.347149 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 03:28:52.350996 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:28:52.351343 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:28:52.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:52.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:52.358856 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:28:52.359372 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 16 03:28:52.362328 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:28:52.363860 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:28:52.364032 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:28:52.364090 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:28:52.364139 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:28:52.364205 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:28:52.364368 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 03:28:52.365361 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:28:52.375383 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:28:52.375673 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 03:28:52.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:28:52.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:28:52.577000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 03:28:52.577000 audit[1805]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc548c9e80 a2=420 a3=0 items=0 ppid=1770 pid=1805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:28:52.577000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:28:52.579357 augenrules[1805]: No rules Dec 16 03:28:52.582014 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 03:28:52.586736 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 03:28:52.874266 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 03:28:52.874911 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 03:28:55.106671 ldconfig[1772]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 03:28:55.119106 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 03:28:55.121154 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 03:28:55.164031 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 03:28:55.165787 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 03:28:55.166408 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 03:28:55.166867 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Dec 16 03:28:55.167275 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 03:28:55.167797 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 03:28:55.168301 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 03:28:55.168750 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 03:28:55.169205 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 03:28:55.169582 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 03:28:55.169923 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 03:28:55.169970 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:28:55.170394 systemd[1]: Reached target timers.target - Timer Units. Dec 16 03:28:55.171958 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 03:28:55.174028 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 03:28:55.177091 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 03:28:55.177796 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 03:28:55.178222 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 03:28:55.181382 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 03:28:55.182174 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 03:28:55.183375 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 03:28:55.184832 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:28:55.185252 systemd[1]: Reached target basic.target - Basic System. 
Dec 16 03:28:55.185676 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:28:55.185715 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:28:55.186854 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 03:28:55.189372 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 03:28:55.194436 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 03:28:55.208404 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 03:28:55.212413 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 03:28:55.216587 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 03:28:55.218339 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 03:28:55.225525 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 03:28:55.231489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:28:55.234072 jq[1821]: false Dec 16 03:28:55.235532 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 03:28:55.243494 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 03:28:55.247888 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 03:28:55.253297 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 03:28:55.259671 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 16 03:28:55.266561 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 03:28:55.298643 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Dec 16 03:28:55.313126 extend-filesystems[1822]: Found /dev/nvme0n1p6 Dec 16 03:28:55.315533 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 03:28:55.316164 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 03:28:55.317613 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 03:28:55.324227 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 03:28:55.332222 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 03:28:55.337654 google_oslogin_nss_cache[1823]: oslogin_cache_refresh[1823]: Refreshing passwd entry cache Dec 16 03:28:55.345212 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 03:28:55.342291 oslogin_cache_refresh[1823]: Refreshing passwd entry cache Dec 16 03:28:55.348876 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 03:28:55.349458 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 03:28:55.388257 extend-filesystems[1822]: Found /dev/nvme0n1p9 Dec 16 03:28:55.388006 oslogin_cache_refresh[1823]: Failure getting users, quitting Dec 16 03:28:55.388982 google_oslogin_nss_cache[1823]: oslogin_cache_refresh[1823]: Failure getting users, quitting Dec 16 03:28:55.388982 google_oslogin_nss_cache[1823]: oslogin_cache_refresh[1823]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 03:28:55.388982 google_oslogin_nss_cache[1823]: oslogin_cache_refresh[1823]: Refreshing group entry cache Dec 16 03:28:55.388030 oslogin_cache_refresh[1823]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Dec 16 03:28:55.388081 oslogin_cache_refresh[1823]: Refreshing group entry cache Dec 16 03:28:55.403825 google_oslogin_nss_cache[1823]: oslogin_cache_refresh[1823]: Failure getting groups, quitting Dec 16 03:28:55.403825 google_oslogin_nss_cache[1823]: oslogin_cache_refresh[1823]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 03:28:55.403426 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 03:28:55.402756 oslogin_cache_refresh[1823]: Failure getting groups, quitting Dec 16 03:28:55.403781 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 03:28:55.402773 oslogin_cache_refresh[1823]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 03:28:55.411205 extend-filesystems[1822]: Checking size of /dev/nvme0n1p9 Dec 16 03:28:55.416216 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 03:28:55.417313 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 03:28:55.430455 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 03:28:55.430852 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 03:28:55.451056 jq[1841]: true Dec 16 03:28:55.467836 update_engine[1839]: I20251216 03:28:55.467317 1839 main.cc:92] Flatcar Update Engine starting Dec 16 03:28:55.477779 coreos-metadata[1818]: Dec 16 03:28:55.477 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 03:28:55.480912 coreos-metadata[1818]: Dec 16 03:28:55.480 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 16 03:28:55.482950 coreos-metadata[1818]: Dec 16 03:28:55.481 INFO Fetch successful Dec 16 03:28:55.483485 dbus-daemon[1819]: [system] SELinux support is enabled Dec 16 03:28:55.485517 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 16 03:28:55.487891 coreos-metadata[1818]: Dec 16 03:28:55.483 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 16 03:28:55.491699 coreos-metadata[1818]: Dec 16 03:28:55.491 INFO Fetch successful Dec 16 03:28:55.491699 coreos-metadata[1818]: Dec 16 03:28:55.491 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 16 03:28:55.491609 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 03:28:55.491652 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 03:28:55.493380 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 03:28:55.493409 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Dec 16 03:28:55.494860 coreos-metadata[1818]: Dec 16 03:28:55.494 INFO Fetch successful Dec 16 03:28:55.494860 coreos-metadata[1818]: Dec 16 03:28:55.494 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 16 03:28:55.500104 dbus-daemon[1819]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1463 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 03:28:55.500787 coreos-metadata[1818]: Dec 16 03:28:55.500 INFO Fetch successful Dec 16 03:28:55.500787 coreos-metadata[1818]: Dec 16 03:28:55.500 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 16 03:28:55.503057 coreos-metadata[1818]: Dec 16 03:28:55.501 INFO Fetch failed with 404: resource not found Dec 16 03:28:55.503057 coreos-metadata[1818]: Dec 16 03:28:55.501 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 16 03:28:55.504762 coreos-metadata[1818]: Dec 16 03:28:55.504 INFO Fetch successful Dec 16 03:28:55.504762 coreos-metadata[1818]: Dec 16 03:28:55.504 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 16 03:28:55.506258 jq[1880]: true Dec 16 03:28:55.508174 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 03:28:55.510385 update_engine[1839]: I20251216 03:28:55.509459 1839 update_check_scheduler.cc:74] Next update check in 11m55s Dec 16 03:28:55.509868 systemd[1]: Started update-engine.service - Update Engine. 
Dec 16 03:28:55.511408 coreos-metadata[1818]: Dec 16 03:28:55.511 INFO Fetch successful Dec 16 03:28:55.511408 coreos-metadata[1818]: Dec 16 03:28:55.511 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 16 03:28:55.512923 ntpd[1826]: ntpd 4.2.8p18@1.4062-o Mon Dec 15 23:46:33 UTC 2025 (1): Starting Dec 16 03:28:55.513784 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: ntpd 4.2.8p18@1.4062-o Mon Dec 15 23:46:33 UTC 2025 (1): Starting Dec 16 03:28:55.513784 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 03:28:55.513784 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: ---------------------------------------------------- Dec 16 03:28:55.513784 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: ntp-4 is maintained by Network Time Foundation, Dec 16 03:28:55.513784 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 03:28:55.513784 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: corporation. Support and training for ntp-4 are Dec 16 03:28:55.513784 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: available at https://www.nwtime.org/support Dec 16 03:28:55.513784 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: ---------------------------------------------------- Dec 16 03:28:55.512988 ntpd[1826]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 03:28:55.513000 ntpd[1826]: ---------------------------------------------------- Dec 16 03:28:55.513009 ntpd[1826]: ntp-4 is maintained by Network Time Foundation, Dec 16 03:28:55.513018 ntpd[1826]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 03:28:55.513029 ntpd[1826]: corporation. 
Support and training for ntp-4 are Dec 16 03:28:55.513039 ntpd[1826]: available at https://www.nwtime.org/support Dec 16 03:28:55.513049 ntpd[1826]: ---------------------------------------------------- Dec 16 03:28:55.519191 ntpd[1826]: proto: precision = 0.081 usec (-23) Dec 16 03:28:55.522928 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: proto: precision = 0.081 usec (-23) Dec 16 03:28:55.523226 ntpd[1826]: basedate set to 2025-12-03 Dec 16 03:28:55.525456 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: basedate set to 2025-12-03 Dec 16 03:28:55.525456 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: gps base set to 2025-12-07 (week 2396) Dec 16 03:28:55.525456 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 03:28:55.525456 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 03:28:55.525456 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 03:28:55.525456 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: Listen normally on 3 eth0 172.31.30.117:123 Dec 16 03:28:55.525456 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: Listen normally on 4 lo [::1]:123 Dec 16 03:28:55.525456 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: Listen normally on 5 eth0 [fe80::478:87ff:fe02:b47f%2]:123 Dec 16 03:28:55.525456 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: Listening on routing socket on fd #22 for interface updates Dec 16 03:28:55.523267 ntpd[1826]: gps base set to 2025-12-07 (week 2396) Dec 16 03:28:55.523413 ntpd[1826]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 03:28:55.523444 ntpd[1826]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 03:28:55.524437 ntpd[1826]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 03:28:55.524470 ntpd[1826]: Listen normally on 3 eth0 172.31.30.117:123 Dec 16 03:28:55.524503 ntpd[1826]: Listen normally on 4 lo [::1]:123 Dec 16 03:28:55.524531 ntpd[1826]: Listen normally on 5 eth0 [fe80::478:87ff:fe02:b47f%2]:123 Dec 16 03:28:55.524570 ntpd[1826]: Listening on routing socket on fd 
#22 for interface updates Dec 16 03:28:55.528939 coreos-metadata[1818]: Dec 16 03:28:55.528 INFO Fetch successful Dec 16 03:28:55.531175 coreos-metadata[1818]: Dec 16 03:28:55.531 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 16 03:28:55.532701 ntpd[1826]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 03:28:55.536280 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 03:28:55.536280 ntpd[1826]: 16 Dec 03:28:55 ntpd[1826]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 03:28:55.533021 ntpd[1826]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 03:28:55.537628 coreos-metadata[1818]: Dec 16 03:28:55.537 INFO Fetch successful Dec 16 03:28:55.537628 coreos-metadata[1818]: Dec 16 03:28:55.537 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 16 03:28:55.543263 coreos-metadata[1818]: Dec 16 03:28:55.541 INFO Fetch successful Dec 16 03:28:55.558595 extend-filesystems[1822]: Resized partition /dev/nvme0n1p9 Dec 16 03:28:55.560537 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 03:28:55.568283 extend-filesystems[1896]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 03:28:55.566440 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 16 03:28:55.576265 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Dec 16 03:28:55.576396 tar[1845]: linux-amd64/LICENSE Dec 16 03:28:55.576396 tar[1845]: linux-amd64/helm Dec 16 03:28:55.589523 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 16 03:28:55.590987 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Dec 16 03:28:55.625368 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Dec 16 03:28:55.653265 extend-filesystems[1896]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 16 03:28:55.653265 extend-filesystems[1896]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 16 03:28:55.653265 extend-filesystems[1896]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Dec 16 03:28:55.652880 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 03:28:55.684735 extend-filesystems[1822]: Resized filesystem in /dev/nvme0n1p9 Dec 16 03:28:55.653198 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 03:28:55.718114 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 03:28:55.719862 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 03:28:55.727940 systemd-logind[1837]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 03:28:55.727981 systemd-logind[1837]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 16 03:28:55.728007 systemd-logind[1837]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 03:28:55.729482 systemd-logind[1837]: New seat seat0. Dec 16 03:28:55.732759 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 03:28:55.849363 bash[1935]: Updated "/home/core/.ssh/authorized_keys" Dec 16 03:28:55.850205 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 03:28:55.863611 systemd[1]: Starting sshkeys.service... Dec 16 03:28:55.977105 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 03:28:55.982438 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Dec 16 03:28:56.097667 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 16 03:28:56.113678 dbus-daemon[1819]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 03:28:56.117008 dbus-daemon[1819]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1883 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 03:28:56.119449 amazon-ssm-agent[1900]: Initializing new seelog logger Dec 16 03:28:56.127485 amazon-ssm-agent[1900]: New Seelog Logger Creation Complete Dec 16 03:28:56.127485 amazon-ssm-agent[1900]: 2025/12/16 03:28:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 03:28:56.127485 amazon-ssm-agent[1900]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 03:28:56.133269 amazon-ssm-agent[1900]: 2025/12/16 03:28:56 processing appconfig overrides Dec 16 03:28:56.132726 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 03:28:56.140929 amazon-ssm-agent[1900]: 2025/12/16 03:28:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 03:28:56.140929 amazon-ssm-agent[1900]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 03:28:56.140929 amazon-ssm-agent[1900]: 2025/12/16 03:28:56 processing appconfig overrides Dec 16 03:28:56.140929 amazon-ssm-agent[1900]: 2025/12/16 03:28:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 03:28:56.140929 amazon-ssm-agent[1900]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 03:28:56.140929 amazon-ssm-agent[1900]: 2025/12/16 03:28:56 processing appconfig overrides Dec 16 03:28:56.144219 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.1363 INFO Proxy environment variables: Dec 16 03:28:56.153309 amazon-ssm-agent[1900]: 2025/12/16 03:28:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 16 03:28:56.153667 amazon-ssm-agent[1900]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 03:28:56.153667 amazon-ssm-agent[1900]: 2025/12/16 03:28:56 processing appconfig overrides Dec 16 03:28:56.220487 locksmithd[1888]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 03:28:56.254495 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.1385 INFO https_proxy: Dec 16 03:28:56.333971 coreos-metadata[1954]: Dec 16 03:28:56.333 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 03:28:56.338051 coreos-metadata[1954]: Dec 16 03:28:56.337 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 16 03:28:56.338922 coreos-metadata[1954]: Dec 16 03:28:56.338 INFO Fetch successful Dec 16 03:28:56.338922 coreos-metadata[1954]: Dec 16 03:28:56.338 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 03:28:56.347271 coreos-metadata[1954]: Dec 16 03:28:56.342 INFO Fetch successful Dec 16 03:28:56.353931 unknown[1954]: wrote ssh authorized keys file for user: core Dec 16 03:28:56.355627 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.1385 INFO http_proxy: Dec 16 03:28:56.391909 sshd_keygen[1884]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 03:28:56.414871 update-ssh-keys[2039]: Updated "/home/core/.ssh/authorized_keys" Dec 16 03:28:56.415933 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 03:28:56.421731 systemd[1]: Finished sshkeys.service. Dec 16 03:28:56.462547 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.1385 INFO no_proxy: Dec 16 03:28:56.528454 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 03:28:56.540485 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Dec 16 03:28:56.572783 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.1387 INFO Checking if agent identity type OnPrem can be assumed Dec 16 03:28:56.611053 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 03:28:56.611651 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 03:28:56.619217 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 03:28:56.661526 containerd[1851]: time="2025-12-16T03:28:56Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 03:28:56.665502 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 03:28:56.668098 containerd[1851]: time="2025-12-16T03:28:56.667800152Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 03:28:56.674702 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.1389 INFO Checking if agent identity type EC2 can be assumed Dec 16 03:28:56.678532 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 03:28:56.688778 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 03:28:56.689734 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 16 03:28:56.718004 containerd[1851]: time="2025-12-16T03:28:56.716545428Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.468µs" Dec 16 03:28:56.718004 containerd[1851]: time="2025-12-16T03:28:56.716609355Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 03:28:56.718004 containerd[1851]: time="2025-12-16T03:28:56.716672996Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 03:28:56.718004 containerd[1851]: time="2025-12-16T03:28:56.716691629Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 03:28:56.718004 containerd[1851]: time="2025-12-16T03:28:56.716915885Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 03:28:56.718004 containerd[1851]: time="2025-12-16T03:28:56.716950334Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:28:56.718004 containerd[1851]: time="2025-12-16T03:28:56.717032958Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:28:56.718004 containerd[1851]: time="2025-12-16T03:28:56.717049128Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:28:56.718004 containerd[1851]: time="2025-12-16T03:28:56.717353127Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:28:56.718004 containerd[1851]: time="2025-12-16T03:28:56.717375896Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:28:56.718004 containerd[1851]: time="2025-12-16T03:28:56.717391358Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:28:56.718004 containerd[1851]: time="2025-12-16T03:28:56.717403608Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:28:56.718524 containerd[1851]: time="2025-12-16T03:28:56.717591825Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:28:56.718524 containerd[1851]: time="2025-12-16T03:28:56.717608344Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 03:28:56.718524 containerd[1851]: time="2025-12-16T03:28:56.717703808Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 03:28:56.718524 containerd[1851]: time="2025-12-16T03:28:56.717902021Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 03:28:56.718524 containerd[1851]: time="2025-12-16T03:28:56.717934795Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 03:28:56.718524 containerd[1851]: time="2025-12-16T03:28:56.717948983Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 03:28:56.718524 containerd[1851]: time="2025-12-16T03:28:56.717999390Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 03:28:56.722278 containerd[1851]: 
time="2025-12-16T03:28:56.721380860Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 03:28:56.722278 containerd[1851]: time="2025-12-16T03:28:56.721513440Z" level=info msg="metadata content store policy set" policy=shared Dec 16 03:28:56.727436 polkitd[1982]: Started polkitd version 126 Dec 16 03:28:56.738010 containerd[1851]: time="2025-12-16T03:28:56.737955771Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 03:28:56.738350 containerd[1851]: time="2025-12-16T03:28:56.738048257Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:28:56.738350 containerd[1851]: time="2025-12-16T03:28:56.738156530Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:28:56.738350 containerd[1851]: time="2025-12-16T03:28:56.738177444Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 03:28:56.738350 containerd[1851]: time="2025-12-16T03:28:56.738196559Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 03:28:56.738350 containerd[1851]: time="2025-12-16T03:28:56.738215364Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 03:28:56.738350 containerd[1851]: time="2025-12-16T03:28:56.738248320Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 03:28:56.738350 containerd[1851]: time="2025-12-16T03:28:56.738262586Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 03:28:56.738350 containerd[1851]: time="2025-12-16T03:28:56.738281306Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 03:28:56.738350 containerd[1851]: time="2025-12-16T03:28:56.738298586Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 03:28:56.738350 containerd[1851]: time="2025-12-16T03:28:56.738314782Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 03:28:56.738350 containerd[1851]: time="2025-12-16T03:28:56.738329597Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 03:28:56.738350 containerd[1851]: time="2025-12-16T03:28:56.738344587Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 03:28:56.738765 containerd[1851]: time="2025-12-16T03:28:56.738361303Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 03:28:56.738765 containerd[1851]: time="2025-12-16T03:28:56.738519084Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 03:28:56.738765 containerd[1851]: time="2025-12-16T03:28:56.738544529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 03:28:56.738765 containerd[1851]: time="2025-12-16T03:28:56.738565259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 03:28:56.738765 containerd[1851]: time="2025-12-16T03:28:56.738581013Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 03:28:56.738765 containerd[1851]: time="2025-12-16T03:28:56.738596267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 03:28:56.738765 containerd[1851]: time="2025-12-16T03:28:56.738610283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images 
type=io.containerd.grpc.v1 Dec 16 03:28:56.738765 containerd[1851]: time="2025-12-16T03:28:56.738633035Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 03:28:56.738765 containerd[1851]: time="2025-12-16T03:28:56.738648726Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 03:28:56.738765 containerd[1851]: time="2025-12-16T03:28:56.738665269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 03:28:56.738765 containerd[1851]: time="2025-12-16T03:28:56.738694825Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 03:28:56.738765 containerd[1851]: time="2025-12-16T03:28:56.738710205Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 03:28:56.738765 containerd[1851]: time="2025-12-16T03:28:56.738750405Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 03:28:56.739272 containerd[1851]: time="2025-12-16T03:28:56.738808334Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 03:28:56.739272 containerd[1851]: time="2025-12-16T03:28:56.738827424Z" level=info msg="Start snapshots syncer" Dec 16 03:28:56.739272 containerd[1851]: time="2025-12-16T03:28:56.738876512Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 03:28:56.741024 polkitd[1982]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 03:28:56.741981 polkitd[1982]: Loading rules from directory /run/polkit-1/rules.d Dec 16 03:28:56.742052 polkitd[1982]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 03:28:56.743789 polkitd[1982]: Loading rules from directory 
/usr/local/share/polkit-1/rules.d Dec 16 03:28:56.743842 polkitd[1982]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 03:28:56.743898 polkitd[1982]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 03:28:56.744735 containerd[1851]: time="2025-12-16T03:28:56.744377925Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEnd
point\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 03:28:56.744735 containerd[1851]: time="2025-12-16T03:28:56.744479815Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 03:28:56.744969 containerd[1851]: time="2025-12-16T03:28:56.744588832Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 03:28:56.744969 containerd[1851]: time="2025-12-16T03:28:56.744762296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 03:28:56.744969 containerd[1851]: time="2025-12-16T03:28:56.744795988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 03:28:56.744969 containerd[1851]: time="2025-12-16T03:28:56.744815878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 03:28:56.744969 containerd[1851]: time="2025-12-16T03:28:56.744834747Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 03:28:56.744969 containerd[1851]: time="2025-12-16T03:28:56.744860904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 03:28:56.744969 containerd[1851]: time="2025-12-16T03:28:56.744884602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 03:28:56.744969 containerd[1851]: time="2025-12-16T03:28:56.744899133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 03:28:56.744969 containerd[1851]: time="2025-12-16T03:28:56.744914050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 03:28:56.744969 
containerd[1851]: time="2025-12-16T03:28:56.744934010Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 03:28:56.746897 containerd[1851]: time="2025-12-16T03:28:56.745013130Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:28:56.746897 containerd[1851]: time="2025-12-16T03:28:56.745047725Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:28:56.746897 containerd[1851]: time="2025-12-16T03:28:56.745060883Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:28:56.746897 containerd[1851]: time="2025-12-16T03:28:56.745073431Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:28:56.746897 containerd[1851]: time="2025-12-16T03:28:56.745085419Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 03:28:56.746897 containerd[1851]: time="2025-12-16T03:28:56.745122467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 03:28:56.746897 containerd[1851]: time="2025-12-16T03:28:56.745137359Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 03:28:56.746897 containerd[1851]: time="2025-12-16T03:28:56.745153455Z" level=info msg="runtime interface created" Dec 16 03:28:56.746897 containerd[1851]: time="2025-12-16T03:28:56.745161908Z" level=info msg="created NRI interface" Dec 16 03:28:56.746897 containerd[1851]: time="2025-12-16T03:28:56.745181349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 03:28:56.746897 containerd[1851]: 
time="2025-12-16T03:28:56.745202470Z" level=info msg="Connect containerd service" Dec 16 03:28:56.746897 containerd[1851]: time="2025-12-16T03:28:56.745246354Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 03:28:56.746539 polkitd[1982]: Finished loading, compiling and executing 2 rules Dec 16 03:28:56.747805 systemd[1]: Started polkit.service - Authorization Manager. Dec 16 03:28:56.751874 dbus-daemon[1819]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 03:28:56.753279 containerd[1851]: time="2025-12-16T03:28:56.752620784Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 03:28:56.752709 polkitd[1982]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 03:28:56.774155 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.5128 INFO Agent will take identity from EC2 Dec 16 03:28:56.784653 amazon-ssm-agent[1900]: 2025/12/16 03:28:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 03:28:56.784906 amazon-ssm-agent[1900]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 03:28:56.785497 amazon-ssm-agent[1900]: 2025/12/16 03:28:56 processing appconfig overrides Dec 16 03:28:56.799560 systemd-hostnamed[1883]: Hostname set to (transient) Dec 16 03:28:56.799789 systemd-resolved[1430]: System hostname changed to 'ip-172-31-30-117'. 
Dec 16 03:28:56.825372 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.5146 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Dec 16 03:28:56.825671 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.5146 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Dec 16 03:28:56.825671 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.5146 INFO [amazon-ssm-agent] Starting Core Agent
Dec 16 03:28:56.825671 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.5146 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Dec 16 03:28:56.825671 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.5146 INFO [Registrar] Starting registrar module
Dec 16 03:28:56.825671 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.5338 INFO [EC2Identity] Checking disk for registration info
Dec 16 03:28:56.825671 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.5338 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Dec 16 03:28:56.826170 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.5339 INFO [EC2Identity] Generating registration keypair
Dec 16 03:28:56.826170 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.7233 INFO [EC2Identity] Checking write access before registering
Dec 16 03:28:56.826170 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.7294 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Dec 16 03:28:56.826170 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.7842 INFO [EC2Identity] EC2 registration was successful.
Dec 16 03:28:56.826170 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.7843 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Dec 16 03:28:56.826170 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.7844 INFO [CredentialRefresher] credentialRefresher has started
Dec 16 03:28:56.826170 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.7844 INFO [CredentialRefresher] Starting credentials refresher loop
Dec 16 03:28:56.826170 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.8250 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Dec 16 03:28:56.826170 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.8252 INFO [CredentialRefresher] Credentials ready
Dec 16 03:28:56.873150 amazon-ssm-agent[1900]: 2025-12-16 03:28:56.8261 INFO [CredentialRefresher] Next credential rotation will be in 29.99998289045 minutes
Dec 16 03:28:57.060303 tar[1845]: linux-amd64/README.md
Dec 16 03:28:57.081886 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 03:28:57.089846 containerd[1851]: time="2025-12-16T03:28:57.089793339Z" level=info msg="Start subscribing containerd event"
Dec 16 03:28:57.090154 containerd[1851]: time="2025-12-16T03:28:57.090124072Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 03:28:57.090223 containerd[1851]: time="2025-12-16T03:28:57.090185404Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 03:28:57.090309 containerd[1851]: time="2025-12-16T03:28:57.090281871Z" level=info msg="Start recovering state"
Dec 16 03:28:57.090403 containerd[1851]: time="2025-12-16T03:28:57.090382128Z" level=info msg="Start event monitor"
Dec 16 03:28:57.091598 containerd[1851]: time="2025-12-16T03:28:57.091539560Z" level=info msg="Start cni network conf syncer for default"
Dec 16 03:28:57.091598 containerd[1851]: time="2025-12-16T03:28:57.091564960Z" level=info msg="Start streaming server"
Dec 16 03:28:57.091598 containerd[1851]: time="2025-12-16T03:28:57.091577223Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 16 03:28:57.091598 containerd[1851]: time="2025-12-16T03:28:57.091587217Z" level=info msg="runtime interface starting up..."
Dec 16 03:28:57.091598 containerd[1851]: time="2025-12-16T03:28:57.091596849Z" level=info msg="starting plugins..."
Dec 16 03:28:57.091823 containerd[1851]: time="2025-12-16T03:28:57.091618433Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 16 03:28:57.091823 containerd[1851]: time="2025-12-16T03:28:57.091799929Z" level=info msg="containerd successfully booted in 0.430853s"
Dec 16 03:28:57.093662 systemd[1]: Started containerd.service - containerd container runtime.
Dec 16 03:28:57.841991 amazon-ssm-agent[1900]: 2025-12-16 03:28:57.8418 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Dec 16 03:28:57.942583 amazon-ssm-agent[1900]: 2025-12-16 03:28:57.8456 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2097) started
Dec 16 03:28:58.044095 amazon-ssm-agent[1900]: 2025-12-16 03:28:57.8456 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Dec 16 03:28:59.397004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 03:28:59.399399 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 03:28:59.401349 systemd[1]: Startup finished in 3.596s (kernel) + 10.247s (initrd) + 12.419s (userspace) = 26.263s.
Dec 16 03:28:59.407383 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 03:29:00.706559 kubelet[2114]: E1216 03:29:00.706455 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 03:29:00.713565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 03:29:00.713792 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 03:29:00.716348 systemd[1]: kubelet.service: Consumed 1.129s CPU time, 267.2M memory peak.
Dec 16 03:29:01.244023 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 03:29:01.246608 systemd[1]: Started sshd@0-172.31.30.117:22-147.75.109.163:48918.service - OpenSSH per-connection server daemon (147.75.109.163:48918).
Dec 16 03:29:01.631805 sshd[2126]: Accepted publickey for core from 147.75.109.163 port 48918 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc
Dec 16 03:29:01.636837 sshd-session[2126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:29:01.654051 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 03:29:01.655924 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 03:29:01.666311 systemd-logind[1837]: New session 1 of user core.
Dec 16 03:29:01.684066 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
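The kubelet exit above is the usual first-boot state: /var/lib/kubelet/config.yaml does not exist until the node is initialized or joined (kubeadm normally writes it), so systemd keeps restarting kubelet until then. A hypothetical minimal sketch of the file kubelet is looking for, with illustrative values only:

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml (normally generated by kubeadm)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```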
Dec 16 03:29:01.687759 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 03:29:01.713832 (systemd)[2132]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:29:01.720781 systemd-logind[1837]: New session 2 of user core.
Dec 16 03:29:01.958081 systemd[2132]: Queued start job for default target default.target.
Dec 16 03:29:01.967153 systemd[2132]: Created slice app.slice - User Application Slice.
Dec 16 03:29:01.967224 systemd[2132]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Dec 16 03:29:01.967271 systemd[2132]: Reached target paths.target - Paths.
Dec 16 03:29:01.967355 systemd[2132]: Reached target timers.target - Timers.
Dec 16 03:29:01.974500 systemd[2132]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 03:29:01.976660 systemd[2132]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Dec 16 03:29:02.009002 systemd[2132]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Dec 16 03:29:02.009440 systemd[2132]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 03:29:02.010635 systemd[2132]: Reached target sockets.target - Sockets.
Dec 16 03:29:02.010718 systemd[2132]: Reached target basic.target - Basic System.
Dec 16 03:29:02.010775 systemd[2132]: Reached target default.target - Main User Target.
Dec 16 03:29:02.010817 systemd[2132]: Startup finished in 280ms.
Dec 16 03:29:02.011106 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 03:29:02.019579 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 03:29:02.113509 systemd[1]: Started sshd@1-172.31.30.117:22-147.75.109.163:46842.service - OpenSSH per-connection server daemon (147.75.109.163:46842).
Dec 16 03:29:02.304080 sshd[2146]: Accepted publickey for core from 147.75.109.163 port 46842 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc
Dec 16 03:29:02.306918 sshd-session[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:29:02.314307 systemd-logind[1837]: New session 3 of user core.
Dec 16 03:29:02.331131 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 03:29:02.395939 sshd[2150]: Connection closed by 147.75.109.163 port 46842
Dec 16 03:29:02.397027 sshd-session[2146]: pam_unix(sshd:session): session closed for user core
Dec 16 03:29:02.406730 systemd[1]: sshd@1-172.31.30.117:22-147.75.109.163:46842.service: Deactivated successfully.
Dec 16 03:29:02.411088 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 03:29:02.412956 systemd-logind[1837]: Session 3 logged out. Waiting for processes to exit.
Dec 16 03:29:02.414845 systemd-logind[1837]: Removed session 3.
Dec 16 03:29:02.458923 systemd[1]: Started sshd@2-172.31.30.117:22-147.75.109.163:46854.service - OpenSSH per-connection server daemon (147.75.109.163:46854).
Dec 16 03:29:03.489628 systemd-resolved[1430]: Clock change detected. Flushing caches.
Dec 16 03:29:03.625906 sshd[2156]: Accepted publickey for core from 147.75.109.163 port 46854 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc
Dec 16 03:29:03.627936 sshd-session[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:29:03.646153 systemd-logind[1837]: New session 4 of user core.
Dec 16 03:29:03.658472 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 03:29:03.723713 sshd[2160]: Connection closed by 147.75.109.163 port 46854
Dec 16 03:29:03.725059 sshd-session[2156]: pam_unix(sshd:session): session closed for user core
Dec 16 03:29:03.731310 systemd[1]: sshd@2-172.31.30.117:22-147.75.109.163:46854.service: Deactivated successfully.
Dec 16 03:29:03.733503 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 03:29:03.734564 systemd-logind[1837]: Session 4 logged out. Waiting for processes to exit.
Dec 16 03:29:03.736676 systemd-logind[1837]: Removed session 4.
Dec 16 03:29:03.759656 systemd[1]: Started sshd@3-172.31.30.117:22-147.75.109.163:46856.service - OpenSSH per-connection server daemon (147.75.109.163:46856).
Dec 16 03:29:03.958938 sshd[2166]: Accepted publickey for core from 147.75.109.163 port 46856 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc
Dec 16 03:29:03.960882 sshd-session[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:29:03.967979 systemd-logind[1837]: New session 5 of user core.
Dec 16 03:29:03.974492 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 03:29:04.035397 sshd[2170]: Connection closed by 147.75.109.163 port 46856
Dec 16 03:29:04.036125 sshd-session[2166]: pam_unix(sshd:session): session closed for user core
Dec 16 03:29:04.042927 systemd[1]: sshd@3-172.31.30.117:22-147.75.109.163:46856.service: Deactivated successfully.
Dec 16 03:29:04.045636 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 03:29:04.047706 systemd-logind[1837]: Session 5 logged out. Waiting for processes to exit.
Dec 16 03:29:04.050745 systemd-logind[1837]: Removed session 5.
Dec 16 03:29:04.071898 systemd[1]: Started sshd@4-172.31.30.117:22-147.75.109.163:46872.service - OpenSSH per-connection server daemon (147.75.109.163:46872).
Dec 16 03:29:04.248319 sshd[2176]: Accepted publickey for core from 147.75.109.163 port 46872 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc
Dec 16 03:29:04.250314 sshd-session[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:29:04.256247 systemd-logind[1837]: New session 6 of user core.
Dec 16 03:29:04.263687 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 03:29:04.397653 sudo[2181]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 03:29:04.398971 sudo[2181]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 03:29:04.415730 sudo[2181]: pam_unix(sudo:session): session closed for user root
Dec 16 03:29:04.438129 sshd[2180]: Connection closed by 147.75.109.163 port 46872
Dec 16 03:29:04.439163 sshd-session[2176]: pam_unix(sshd:session): session closed for user core
Dec 16 03:29:04.447782 systemd[1]: sshd@4-172.31.30.117:22-147.75.109.163:46872.service: Deactivated successfully.
Dec 16 03:29:04.450678 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 03:29:04.451834 systemd-logind[1837]: Session 6 logged out. Waiting for processes to exit.
Dec 16 03:29:04.454094 systemd-logind[1837]: Removed session 6.
Dec 16 03:29:04.483825 systemd[1]: Started sshd@5-172.31.30.117:22-147.75.109.163:46886.service - OpenSSH per-connection server daemon (147.75.109.163:46886).
Dec 16 03:29:04.660010 sshd[2188]: Accepted publickey for core from 147.75.109.163 port 46886 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc
Dec 16 03:29:04.668374 sshd-session[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:29:04.679251 systemd-logind[1837]: New session 7 of user core.
Dec 16 03:29:04.689825 systemd[1]: Started session-7.scope - Session 7 of User core.
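The logind churn above (short-lived SSH connections opening session 3, 4, 5, 6... and removing each in turn) can be tallied with a throwaway parser. This sketch is not part of the log; the sample lines are condensed from the entries above:

```python
import re

# Lines condensed from the systemd-logind entries in this log
LOG = """\
systemd-logind[1837]: New session 3 of user core.
systemd-logind[1837]: Removed session 3.
systemd-logind[1837]: New session 4 of user core.
systemd-logind[1837]: Removed session 4.
systemd-logind[1837]: New session 5 of user core.
systemd-logind[1837]: Removed session 5.
systemd-logind[1837]: New session 6 of user core.
"""

def open_sessions(text: str) -> list[int]:
    # Pair each "New session N" with its "Removed session N";
    # whatever is left unpaired is still open.
    opened: set[int] = set()
    for m in re.finditer(r"(New|Removed) session (\d+)", text):
        n = int(m.group(2))
        if m.group(1) == "New":
            opened.add(n)
        else:
            opened.discard(n)
    return sorted(opened)

print(open_sessions(LOG))  # → [6]
```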
Dec 16 03:29:04.735614 sudo[2194]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 03:29:04.736020 sudo[2194]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 03:29:04.741156 sudo[2194]: pam_unix(sudo:session): session closed for user root
Dec 16 03:29:04.748648 sudo[2193]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 03:29:04.749051 sudo[2193]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 03:29:04.757668 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 03:29:04.796000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 16 03:29:04.798415 augenrules[2218]: No rules
Dec 16 03:29:04.798763 kernel: kauditd_printk_skb: 115 callbacks suppressed
Dec 16 03:29:04.798826 kernel: audit: type=1305 audit(1765855744.796:240): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Dec 16 03:29:04.796000 audit[2218]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff2d0130e0 a2=420 a3=0 items=0 ppid=2199 pid=2218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:29:04.800721 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 03:29:04.801305 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 03:29:04.804995 kernel: audit: type=1300 audit(1765855744.796:240): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff2d0130e0 a2=420 a3=0 items=0 ppid=2199 pid=2218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:29:04.805283 sudo[2193]: pam_unix(sudo:session): session closed for user root
Dec 16 03:29:04.796000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 03:29:04.808189 kernel: audit: type=1327 audit(1765855744.796:240): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 16 03:29:04.808236 kernel: audit: type=1130 audit(1765855744.801:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:29:04.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:29:04.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:29:04.811269 kernel: audit: type=1131 audit(1765855744.801:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:29:04.804000 audit[2193]: USER_END pid=2193 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 03:29:04.817302 kernel: audit: type=1106 audit(1765855744.804:243): pid=2193 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 03:29:04.817378 kernel: audit: type=1104 audit(1765855744.804:244): pid=2193 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 03:29:04.804000 audit[2193]: CRED_DISP pid=2193 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 03:29:04.827400 sshd[2192]: Connection closed by 147.75.109.163 port 46886
Dec 16 03:29:04.828311 sshd-session[2188]: pam_unix(sshd:session): session closed for user core
Dec 16 03:29:04.829000 audit[2188]: USER_END pid=2188 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:29:04.835113 systemd[1]: sshd@5-172.31.30.117:22-147.75.109.163:46886.service: Deactivated successfully.
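The audit PROCTITLE records in this section carry the process command line hex-encoded, with NUL bytes separating argv elements. A short decoder sketch (not part of the log) makes them readable:

```python
def decode_proctitle(hexstr: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated."""
    return bytes.fromhex(hexstr).decode().split("\x00")

# The proctitle from audit(1765855744.796:240) above decodes to
# auditctl reloading the (now empty) rules file:
print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"))
# → ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```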
Dec 16 03:29:04.842872 kernel: audit: type=1106 audit(1765855744.829:245): pid=2188 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:29:04.842985 kernel: audit: type=1104 audit(1765855744.829:246): pid=2188 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:29:04.829000 audit[2188]: CRED_DISP pid=2188 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:29:04.838306 systemd[1]: session-7.scope: Deactivated successfully.
Dec 16 03:29:04.840368 systemd-logind[1837]: Session 7 logged out. Waiting for processes to exit.
Dec 16 03:29:04.842682 systemd-logind[1837]: Removed session 7.
Dec 16 03:29:04.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.30.117:22-147.75.109.163:46886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:29:04.848329 kernel: audit: type=1131 audit(1765855744.834:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.30.117:22-147.75.109.163:46886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:29:04.865311 systemd[1]: Started sshd@6-172.31.30.117:22-147.75.109.163:46892.service - OpenSSH per-connection server daemon (147.75.109.163:46892).
Dec 16 03:29:04.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.30.117:22-147.75.109.163:46892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:29:05.064000 audit[2227]: USER_ACCT pid=2227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:29:05.065760 sshd[2227]: Accepted publickey for core from 147.75.109.163 port 46892 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc
Dec 16 03:29:05.072000 audit[2227]: CRED_ACQ pid=2227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:29:05.072000 audit[2227]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5c9e6c70 a2=3 a3=0 items=0 ppid=1 pid=2227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:29:05.072000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:29:05.074011 sshd-session[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:29:05.128279 systemd-logind[1837]: New session 8 of user core.
Dec 16 03:29:05.136971 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 16 03:29:05.139000 audit[2227]: USER_START pid=2227 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:29:05.147000 audit[2231]: CRED_ACQ pid=2231 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:29:05.191000 audit[2232]: USER_ACCT pid=2232 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 03:29:05.193148 sudo[2232]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 03:29:05.192000 audit[2232]: CRED_REFR pid=2232 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 03:29:05.192000 audit[2232]: USER_START pid=2232 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Dec 16 03:29:05.193606 sudo[2232]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 03:29:07.393362 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 03:29:07.404888 (dockerd)[2251]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 03:29:08.495202 dockerd[2251]: time="2025-12-16T03:29:08.495046288Z" level=info msg="Starting up"
Dec 16 03:29:08.497194 dockerd[2251]: time="2025-12-16T03:29:08.496765856Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 03:29:08.511641 dockerd[2251]: time="2025-12-16T03:29:08.511595517Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 03:29:08.566793 dockerd[2251]: time="2025-12-16T03:29:08.566744321Z" level=info msg="Loading containers: start."
Dec 16 03:29:08.580206 kernel: Initializing XFRM netlink socket
Dec 16 03:29:08.665000 audit[2299]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2299 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:29:08.665000 audit[2299]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffef13837a0 a2=0 a3=0 items=0 ppid=2251 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:29:08.665000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Dec 16 03:29:08.668000 audit[2301]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2301 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:29:08.668000 audit[2301]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff5b203500 a2=0 a3=0 items=0 ppid=2251 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:29:08.668000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Dec 16 03:29:08.670000 audit[2303]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2303 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:29:08.670000 audit[2303]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5093af80 a2=0 a3=0 items=0 ppid=2251 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:29:08.670000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Dec 16 03:29:08.673000 audit[2305]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2305 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:29:08.673000 audit[2305]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd486f8d70 a2=0 a3=0 items=0 ppid=2251 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:29:08.673000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Dec 16 03:29:08.675000 audit[2307]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2307 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:29:08.675000 audit[2307]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdc8a4e970 a2=0 a3=0 items=0 ppid=2251 pid=2307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:29:08.675000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Dec 16 03:29:08.677000 audit[2309]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:29:08.677000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffcdf5f060 a2=0 a3=0 items=0 ppid=2251 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:29:08.677000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 03:29:08.680000 audit[2311]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2311 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:29:08.680000 audit[2311]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd87437ec0 a2=0 a3=0 items=0 ppid=2251 pid=2311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:29:08.680000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Dec 16 03:29:08.682000 audit[2313]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2313 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:29:08.682000 audit[2313]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe8e61de60 a2=0 a3=0 items=0 ppid=2251 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:29:08.682000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Dec 16 03:29:08.732000 audit[2316]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2316 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:29:08.732000 audit[2316]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffe3d1a3b40 a2=0 a3=0 items=0 ppid=2251 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:29:08.732000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Dec 16 03:29:08.737000 audit[2318]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:29:08.737000 audit[2318]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff50795780 a2=0 a3=0 items=0 ppid=2251 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:29:08.737000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Dec 16 03:29:08.740000 audit[2320]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:29:08.740000 audit[2320]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffcb3ee8020 a2=0 a3=0 items=0 ppid=2251 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:29:08.740000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Dec 16 03:29:08.742000 audit[2322]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2322 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:29:08.742000 audit[2322]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc0c554b40 a2=0 a3=0 items=0 ppid=2251 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:29:08.742000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Dec 16 03:29:08.745000 audit[2324]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 03:29:08.745000 audit[2324]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdf621ce60 a2=0 a3=0 items=0 ppid=2251 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:29:08.745000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Dec 16 03:29:08.793000 audit[2354]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2354 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 03:29:08.793000 audit[2354]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe5e1e7ad0 a2=0 a3=0 items=0 ppid=2251 pid=2354 auid=4294967295 uid=0
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.793000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 03:29:08.795000 audit[2356]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2356 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:08.795000 audit[2356]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff511bd050 a2=0 a3=0 items=0 ppid=2251 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.795000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 03:29:08.798000 audit[2358]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:08.798000 audit[2358]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3a502560 a2=0 a3=0 items=0 ppid=2251 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.798000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 03:29:08.800000 audit[2360]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:08.800000 audit[2360]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc046d7420 a2=0 a3=0 items=0 ppid=2251 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.800000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 03:29:08.802000 audit[2362]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2362 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:08.802000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf621cc50 a2=0 a3=0 items=0 ppid=2251 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.802000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 03:29:08.805000 audit[2364]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2364 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:08.805000 audit[2364]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd57f4e390 a2=0 a3=0 items=0 ppid=2251 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.805000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:29:08.807000 audit[2366]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2366 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:08.807000 audit[2366]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc18522c30 a2=0 a3=0 items=0 ppid=2251 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.807000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:29:08.810000 audit[2368]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2368 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:08.810000 audit[2368]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc6c8b3580 a2=0 a3=0 items=0 ppid=2251 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.810000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 03:29:08.812000 audit[2370]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:08.812000 audit[2370]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd013588f0 a2=0 a3=0 items=0 ppid=2251 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.812000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 03:29:08.814000 audit[2372]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2372 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 
03:29:08.814000 audit[2372]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcf2d98090 a2=0 a3=0 items=0 ppid=2251 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.814000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:29:08.817000 audit[2374]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2374 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:08.817000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff2e64c140 a2=0 a3=0 items=0 ppid=2251 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.817000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 03:29:08.819000 audit[2376]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:08.819000 audit[2376]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe8a10c700 a2=0 a3=0 items=0 ppid=2251 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.819000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:29:08.821000 audit[2378]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2378 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:08.821000 audit[2378]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe7e76cba0 a2=0 a3=0 items=0 ppid=2251 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.821000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:29:08.827000 audit[2383]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:08.827000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd5d98e5a0 a2=0 a3=0 items=0 ppid=2251 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.827000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:29:08.830000 audit[2385]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2385 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:08.830000 audit[2385]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffde7acb090 a2=0 a3=0 items=0 ppid=2251 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.830000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:29:08.832000 audit[2387]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2387 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:08.832000 audit[2387]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdcb974410 a2=0 a3=0 items=0 ppid=2251 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.832000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:29:08.834000 audit[2389]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:08.834000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff0012ac90 a2=0 a3=0 items=0 ppid=2251 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.834000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:29:08.837000 audit[2391]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:08.837000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffef86f7a80 a2=0 a3=0 items=0 ppid=2251 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.837000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:29:08.839000 audit[2393]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2393 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:08.839000 audit[2393]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc3cfcec90 a2=0 a3=0 items=0 ppid=2251 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.839000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:29:08.850467 (udev-worker)[2272]: Network interface NamePolicy= disabled on kernel command line. Dec 16 03:29:08.860000 audit[2398]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:08.860000 audit[2398]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc998a8970 a2=0 a3=0 items=0 ppid=2251 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.860000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 03:29:08.863000 audit[2400]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2400 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:08.863000 audit[2400]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffcc412c9f0 a2=0 a3=0 items=0 ppid=2251 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.863000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 03:29:08.873000 audit[2408]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2408 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:08.873000 audit[2408]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc65cff090 a2=0 a3=0 items=0 ppid=2251 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.873000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 03:29:08.885000 audit[2414]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:08.885000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcd8b67df0 a2=0 a3=0 items=0 ppid=2251 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.885000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 03:29:08.888000 audit[2416]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:08.888000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff00bed010 a2=0 a3=0 items=0 ppid=2251 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.888000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 03:29:08.891000 audit[2418]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:08.891000 audit[2418]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc3d7394a0 a2=0 a3=0 items=0 ppid=2251 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.891000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 03:29:08.893000 audit[2420]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:08.893000 audit[2420]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff9db3f260 a2=0 a3=0 items=0 ppid=2251 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.893000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:29:08.896000 audit[2422]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2422 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Dec 16 03:29:08.896000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe85190060 a2=0 a3=0 items=0 ppid=2251 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:08.896000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 03:29:08.898293 systemd-networkd[1463]: docker0: Link UP Dec 16 03:29:08.902840 dockerd[2251]: time="2025-12-16T03:29:08.902789179Z" level=info msg="Loading containers: done." Dec 16 03:29:08.950688 dockerd[2251]: time="2025-12-16T03:29:08.950612161Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 03:29:08.950886 dockerd[2251]: time="2025-12-16T03:29:08.950728097Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 03:29:08.950886 dockerd[2251]: time="2025-12-16T03:29:08.950830736Z" level=info msg="Initializing buildkit" Dec 16 03:29:08.999193 dockerd[2251]: time="2025-12-16T03:29:08.998925935Z" level=info msg="Completed buildkit initialization" Dec 16 03:29:09.013193 dockerd[2251]: time="2025-12-16T03:29:09.013097327Z" level=info msg="Daemon has completed initialization" Dec 16 03:29:09.013688 dockerd[2251]: time="2025-12-16T03:29:09.013296748Z" level=info msg="API listen on /run/docker.sock" Dec 16 03:29:09.013487 systemd[1]: Started docker.service - Docker Application Container Engine. 
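Editor's note (not part of the log): the `proctitle=` fields in the audit records above are the process command lines, hex-encoded by the kernel with NUL bytes separating the arguments. A minimal sketch of how to decode them back into the `iptables`/`ip6tables` invocations:

```python
# Decode an audit PROCTITLE hex string back into the command line.
# The kernel hex-encodes the process argv, with NUL bytes between args.

def decode_proctitle(hex_string: str) -> str:
    raw = bytes.fromhex(hex_string)
    return " ".join(arg.decode() for arg in raw.split(b"\x00"))

# Example: the DOCKER-CT chain registration seen in the log above.
print(decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974"
    "002D740066696C746572002D4E00444F434B45522D4354"
))
# → /usr/bin/iptables --wait -t filter -N DOCKER-CT
```

Applied to the records above, this shows the Docker daemon building its standard chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) for both IPv4 and IPv6.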
Dec 16 03:29:09.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:10.789194 containerd[1851]: time="2025-12-16T03:29:10.788931542Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 03:29:11.473063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3297609707.mount: Deactivated successfully. Dec 16 03:29:11.932298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 03:29:11.936478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:29:12.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:12.206339 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 03:29:12.206395 kernel: audit: type=1130 audit(1765855752.204:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:12.205449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 03:29:12.218817 (kubelet)[2527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:29:12.315595 kubelet[2527]: E1216 03:29:12.315513 2527 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:29:12.320907 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:29:12.321090 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:29:12.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:29:12.325963 systemd[1]: kubelet.service: Consumed 221ms CPU time, 108.9M memory peak. Dec 16 03:29:12.326523 kernel: audit: type=1131 audit(1765855752.320:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 03:29:12.925597 containerd[1851]: time="2025-12-16T03:29:12.925545123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:12.927461 containerd[1851]: time="2025-12-16T03:29:12.927420791Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=27403560" Dec 16 03:29:12.929811 containerd[1851]: time="2025-12-16T03:29:12.929747647Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:12.933746 containerd[1851]: time="2025-12-16T03:29:12.933690434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:12.934765 containerd[1851]: time="2025-12-16T03:29:12.934598445Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 2.145625334s" Dec 16 03:29:12.934765 containerd[1851]: time="2025-12-16T03:29:12.934631323Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 16 03:29:12.935424 containerd[1851]: time="2025-12-16T03:29:12.935366437Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 03:29:14.893599 containerd[1851]: time="2025-12-16T03:29:14.893544144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:14.895951 containerd[1851]: time="2025-12-16T03:29:14.895690739Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24983855" Dec 16 03:29:14.898480 containerd[1851]: time="2025-12-16T03:29:14.898427331Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:14.902856 containerd[1851]: time="2025-12-16T03:29:14.902784761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:14.904212 containerd[1851]: time="2025-12-16T03:29:14.903680813Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.968280704s" Dec 16 03:29:14.904212 containerd[1851]: time="2025-12-16T03:29:14.903716273Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 16 03:29:14.904576 containerd[1851]: time="2025-12-16T03:29:14.904552180Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 03:29:16.500116 containerd[1851]: time="2025-12-16T03:29:16.500061687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:16.502653 containerd[1851]: 
time="2025-12-16T03:29:16.502402982Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19396111" Dec 16 03:29:16.505024 containerd[1851]: time="2025-12-16T03:29:16.504972103Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:16.510361 containerd[1851]: time="2025-12-16T03:29:16.510297553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:16.511349 containerd[1851]: time="2025-12-16T03:29:16.511160629Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.606510463s" Dec 16 03:29:16.511349 containerd[1851]: time="2025-12-16T03:29:16.511209849Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 16 03:29:16.512183 containerd[1851]: time="2025-12-16T03:29:16.512144915Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 03:29:17.552977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2295256881.mount: Deactivated successfully. 
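Editor's note (not part of the log): unit names like `var-lib-containerd-tmpmounts-containerd\x2dmount2295256881.mount` above use systemd's path escaping, where `/` becomes `-` and literal characters such as `-` are written as `\xNN`. A sketch of the reverse mapping, assuming the standard escaping rules:

```python
# Recover the mount path from an escaped systemd mount-unit name:
# '-' maps back to '/', and '\xNN' sequences decode to the byte NN.

def unescape_unit_name(unit: str) -> str:
    name = unit.removesuffix(".mount")
    out = []
    i = 0
    while i < len(name):
        if name.startswith("\\x", i):
            out.append(chr(int(name[i + 2:i + 4], 16)))
            i += 4
        elif name[i] == "-":
            out.append("/")
            i += 1
        else:
            out.append(name[i])
            i += 1
    return "/" + "".join(out)

# One of the transient containerd mount units from the log above.
print(unescape_unit_name(
    "var-lib-containerd-tmpmounts-containerd\\x2dmount2295256881.mount"
))
# → /var/lib/containerd/tmpmounts/containerd-mount2295256881
```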
Dec 16 03:29:18.156559 containerd[1851]: time="2025-12-16T03:29:18.156326942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:18.158696 containerd[1851]: time="2025-12-16T03:29:18.158639268Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=0" Dec 16 03:29:18.160966 containerd[1851]: time="2025-12-16T03:29:18.160879735Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:18.165381 containerd[1851]: time="2025-12-16T03:29:18.165308753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:18.166145 containerd[1851]: time="2025-12-16T03:29:18.165729319Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.653481733s" Dec 16 03:29:18.166145 containerd[1851]: time="2025-12-16T03:29:18.165764207Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 16 03:29:18.166705 containerd[1851]: time="2025-12-16T03:29:18.166619441Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 03:29:18.815132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1051893739.mount: Deactivated successfully. 
Dec 16 03:29:19.887163 containerd[1851]: time="2025-12-16T03:29:19.887096859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:19.889248 containerd[1851]: time="2025-12-16T03:29:19.888978628Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569975" Dec 16 03:29:19.891672 containerd[1851]: time="2025-12-16T03:29:19.891616943Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:19.898831 containerd[1851]: time="2025-12-16T03:29:19.897437772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:19.898831 containerd[1851]: time="2025-12-16T03:29:19.898674301Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.731961799s" Dec 16 03:29:19.898831 containerd[1851]: time="2025-12-16T03:29:19.898711099Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 16 03:29:19.900566 containerd[1851]: time="2025-12-16T03:29:19.900336311Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 03:29:20.399619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount932609163.mount: Deactivated successfully. 
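Editor's note (not part of the log): each containerd `Pulled image … in <duration>` message above reports the image's unpacked size and the pull duration, from which an average throughput can be estimated. An illustrative log-scraping sketch (not a containerd API), tested against the coredns pull above:

```python
import re

# Scrape the size and duration containerd reports in its
# 'Pulled image ... size \"N\" in <duration>' messages and
# estimate average pull throughput.
PULLED_RE = re.compile(r'size \\?"(\d+)\\?" in ([\d.]+)(ms|s)\b')

def pull_stats(msg):
    """Return (size_bytes, seconds, bytes_per_second) or None."""
    m = PULLED_RE.search(msg)
    if m is None:
        return None
    size = int(m.group(1))
    secs = float(m.group(2)) / (1000.0 if m.group(3) == "ms" else 1.0)
    return size, secs, size / secs

# The coredns pull from the log above.
msg = r'Pulled image "registry.k8s.io/coredns/coredns:v1.11.3" ... size \"18562039\" in 1.731961799s'
size, secs, rate = pull_stats(msg)
print(f"{size} bytes in {secs:.3f}s (~{rate / 1e6:.1f} MB/s)")
# → 18562039 bytes in 1.732s (~10.7 MB/s)
```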
Dec 16 03:29:20.412786 containerd[1851]: time="2025-12-16T03:29:20.412725833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:29:20.414865 containerd[1851]: time="2025-12-16T03:29:20.414821363Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 03:29:20.417317 containerd[1851]: time="2025-12-16T03:29:20.417249064Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:29:20.421090 containerd[1851]: time="2025-12-16T03:29:20.421019013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:29:20.422678 containerd[1851]: time="2025-12-16T03:29:20.422113539Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 521.555521ms" Dec 16 03:29:20.422678 containerd[1851]: time="2025-12-16T03:29:20.422148865Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 03:29:20.422678 containerd[1851]: time="2025-12-16T03:29:20.422570511Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 03:29:21.055052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3693868850.mount: Deactivated 
successfully. Dec 16 03:29:22.432080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 03:29:22.436396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:29:22.985388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:29:22.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:22.990339 kernel: audit: type=1130 audit(1765855762.984:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:22.998904 (kubelet)[2666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:29:23.129496 kubelet[2666]: E1216 03:29:23.129441 2666 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:29:23.133446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:29:23.133635 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:29:23.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:29:23.134479 systemd[1]: kubelet.service: Consumed 221ms CPU time, 105.7M memory peak. 
Dec 16 03:29:23.139293 kernel: audit: type=1131 audit(1765855763.133:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:29:23.389731 containerd[1851]: time="2025-12-16T03:29:23.389269962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:23.391309 containerd[1851]: time="2025-12-16T03:29:23.391268874Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55729072" Dec 16 03:29:23.394099 containerd[1851]: time="2025-12-16T03:29:23.393647176Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:23.397939 containerd[1851]: time="2025-12-16T03:29:23.397882348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:23.398887 containerd[1851]: time="2025-12-16T03:29:23.398835086Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.976243764s" Dec 16 03:29:23.398887 containerd[1851]: time="2025-12-16T03:29:23.398870745Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 16 03:29:25.327945 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 03:29:25.328711 systemd[1]: kubelet.service: Consumed 221ms CPU time, 105.7M memory peak. Dec 16 03:29:25.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:25.334228 kernel: audit: type=1130 audit(1765855765.327:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:25.333484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:29:25.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:25.340966 kernel: audit: type=1131 audit(1765855765.327:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:25.375323 systemd[1]: Reload requested from client PID 2703 ('systemctl') (unit session-8.scope)... Dec 16 03:29:25.375345 systemd[1]: Reloading... Dec 16 03:29:25.526250 zram_generator::config[2751]: No configuration found. Dec 16 03:29:25.827986 systemd[1]: Reloading finished in 452 ms. 
Dec 16 03:29:25.860420 kernel: audit: type=1334 audit(1765855765.852:304): prog-id=70 op=LOAD Dec 16 03:29:25.860523 kernel: audit: type=1334 audit(1765855765.852:305): prog-id=71 op=LOAD Dec 16 03:29:25.860547 kernel: audit: type=1334 audit(1765855765.852:306): prog-id=47 op=UNLOAD Dec 16 03:29:25.852000 audit: BPF prog-id=70 op=LOAD Dec 16 03:29:25.852000 audit: BPF prog-id=71 op=LOAD Dec 16 03:29:25.852000 audit: BPF prog-id=47 op=UNLOAD Dec 16 03:29:25.863478 kernel: audit: type=1334 audit(1765855765.852:307): prog-id=48 op=UNLOAD Dec 16 03:29:25.863560 kernel: audit: type=1334 audit(1765855765.856:308): prog-id=72 op=LOAD Dec 16 03:29:25.852000 audit: BPF prog-id=48 op=UNLOAD Dec 16 03:29:25.856000 audit: BPF prog-id=72 op=LOAD Dec 16 03:29:25.869531 kernel: audit: type=1334 audit(1765855765.856:309): prog-id=55 op=UNLOAD Dec 16 03:29:25.856000 audit: BPF prog-id=55 op=UNLOAD Dec 16 03:29:25.856000 audit: BPF prog-id=73 op=LOAD Dec 16 03:29:25.856000 audit: BPF prog-id=56 op=UNLOAD Dec 16 03:29:25.857000 audit: BPF prog-id=74 op=LOAD Dec 16 03:29:25.857000 audit: BPF prog-id=75 op=LOAD Dec 16 03:29:25.857000 audit: BPF prog-id=57 op=UNLOAD Dec 16 03:29:25.857000 audit: BPF prog-id=58 op=UNLOAD Dec 16 03:29:25.858000 audit: BPF prog-id=76 op=LOAD Dec 16 03:29:25.858000 audit: BPF prog-id=69 op=UNLOAD Dec 16 03:29:25.860000 audit: BPF prog-id=77 op=LOAD Dec 16 03:29:25.860000 audit: BPF prog-id=59 op=UNLOAD Dec 16 03:29:25.860000 audit: BPF prog-id=78 op=LOAD Dec 16 03:29:25.860000 audit: BPF prog-id=79 op=LOAD Dec 16 03:29:25.860000 audit: BPF prog-id=60 op=UNLOAD Dec 16 03:29:25.860000 audit: BPF prog-id=61 op=UNLOAD Dec 16 03:29:25.861000 audit: BPF prog-id=80 op=LOAD Dec 16 03:29:25.861000 audit: BPF prog-id=63 op=UNLOAD Dec 16 03:29:25.861000 audit: BPF prog-id=81 op=LOAD Dec 16 03:29:25.861000 audit: BPF prog-id=82 op=LOAD Dec 16 03:29:25.861000 audit: BPF prog-id=64 op=UNLOAD Dec 16 03:29:25.861000 audit: BPF prog-id=65 op=UNLOAD Dec 16 03:29:25.862000 
audit: BPF prog-id=83 op=LOAD Dec 16 03:29:25.862000 audit: BPF prog-id=66 op=UNLOAD Dec 16 03:29:25.862000 audit: BPF prog-id=84 op=LOAD Dec 16 03:29:25.862000 audit: BPF prog-id=85 op=LOAD Dec 16 03:29:25.862000 audit: BPF prog-id=67 op=UNLOAD Dec 16 03:29:25.862000 audit: BPF prog-id=68 op=UNLOAD Dec 16 03:29:25.863000 audit: BPF prog-id=86 op=LOAD Dec 16 03:29:25.863000 audit: BPF prog-id=52 op=UNLOAD Dec 16 03:29:25.863000 audit: BPF prog-id=87 op=LOAD Dec 16 03:29:25.864000 audit: BPF prog-id=88 op=LOAD Dec 16 03:29:25.864000 audit: BPF prog-id=53 op=UNLOAD Dec 16 03:29:25.864000 audit: BPF prog-id=54 op=UNLOAD Dec 16 03:29:25.864000 audit: BPF prog-id=89 op=LOAD Dec 16 03:29:25.864000 audit: BPF prog-id=62 op=UNLOAD Dec 16 03:29:25.865000 audit: BPF prog-id=90 op=LOAD Dec 16 03:29:25.865000 audit: BPF prog-id=49 op=UNLOAD Dec 16 03:29:25.865000 audit: BPF prog-id=91 op=LOAD Dec 16 03:29:25.865000 audit: BPF prog-id=92 op=LOAD Dec 16 03:29:25.865000 audit: BPF prog-id=50 op=UNLOAD Dec 16 03:29:25.865000 audit: BPF prog-id=51 op=UNLOAD Dec 16 03:29:25.880969 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 03:29:25.881083 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 03:29:25.881498 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:29:25.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:29:25.881571 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98.4M memory peak. Dec 16 03:29:25.883577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:29:26.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:29:26.115584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:29:26.127576 (kubelet)[2812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:29:26.208950 kubelet[2812]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:29:26.208950 kubelet[2812]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 03:29:26.208950 kubelet[2812]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:29:26.213830 kubelet[2812]: I1216 03:29:26.213750 2812 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:29:26.632219 kubelet[2812]: I1216 03:29:26.631495 2812 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 03:29:26.632219 kubelet[2812]: I1216 03:29:26.631660 2812 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:29:26.632219 kubelet[2812]: I1216 03:29:26.632051 2812 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 03:29:26.711136 kubelet[2812]: E1216 03:29:26.710982 2812 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
172.31.30.117:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:29:26.711330 kubelet[2812]: I1216 03:29:26.711311 2812 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:29:26.739462 kubelet[2812]: I1216 03:29:26.739425 2812 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:29:26.746719 kubelet[2812]: I1216 03:29:26.746686 2812 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 03:29:26.755465 kubelet[2812]: I1216 03:29:26.755378 2812 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:29:26.755801 kubelet[2812]: I1216 03:29:26.755457 2812 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-117","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePe
riod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:29:26.758247 kubelet[2812]: I1216 03:29:26.758197 2812 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 03:29:26.758247 kubelet[2812]: I1216 03:29:26.758247 2812 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 03:29:26.760025 kubelet[2812]: I1216 03:29:26.759965 2812 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:29:26.765726 kubelet[2812]: I1216 03:29:26.765621 2812 kubelet.go:446] "Attempting to sync node with API server" Dec 16 03:29:26.765726 kubelet[2812]: I1216 03:29:26.765670 2812 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:29:26.767385 kubelet[2812]: I1216 03:29:26.767331 2812 kubelet.go:352] "Adding apiserver pod source" Dec 16 03:29:26.767385 kubelet[2812]: I1216 03:29:26.767366 2812 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:29:26.773909 kubelet[2812]: W1216 03:29:26.773782 2812 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-117&limit=500&resourceVersion=0": dial tcp 172.31.30.117:6443: connect: connection refused Dec 16 03:29:26.773909 kubelet[2812]: E1216 03:29:26.773841 2812 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-117&limit=500&resourceVersion=0\": dial tcp 
172.31.30.117:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:29:26.776641 kubelet[2812]: W1216 03:29:26.776556 2812 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.117:6443: connect: connection refused Dec 16 03:29:26.776641 kubelet[2812]: E1216 03:29:26.776632 2812 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.117:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:29:26.778454 kubelet[2812]: I1216 03:29:26.778151 2812 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:29:26.783521 kubelet[2812]: I1216 03:29:26.783490 2812 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 03:29:26.799199 kubelet[2812]: W1216 03:29:26.798022 2812 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 16 03:29:26.799199 kubelet[2812]: I1216 03:29:26.798894 2812 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:29:26.799199 kubelet[2812]: I1216 03:29:26.798938 2812 server.go:1287] "Started kubelet" Dec 16 03:29:26.805876 kubelet[2812]: I1216 03:29:26.805821 2812 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:29:26.806560 kubelet[2812]: I1216 03:29:26.806354 2812 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:29:26.808203 kubelet[2812]: I1216 03:29:26.806830 2812 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:29:26.813618 kubelet[2812]: I1216 03:29:26.812778 2812 server.go:479] "Adding debug handlers to kubelet server" Dec 16 03:29:26.813618 kubelet[2812]: E1216 03:29:26.809330 2812 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.117:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.117:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-117.18819476c89fcf5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-117,UID:ip-172-31-30-117,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-117,},FirstTimestamp:2025-12-16 03:29:26.798913372 +0000 UTC m=+0.642172051,LastTimestamp:2025-12-16 03:29:26.798913372 +0000 UTC m=+0.642172051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-117,}" Dec 16 03:29:26.818230 kubelet[2812]: I1216 03:29:26.818198 2812 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:29:26.818806 kubelet[2812]: I1216 03:29:26.818786 2812 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:29:26.822358 kubelet[2812]: E1216 03:29:26.822322 2812 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-117\" not found" Dec 16 03:29:26.822552 kubelet[2812]: I1216 03:29:26.822541 2812 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:29:26.822911 kubelet[2812]: I1216 03:29:26.822891 2812 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:29:26.823077 kubelet[2812]: I1216 03:29:26.823068 2812 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:29:26.823665 kubelet[2812]: W1216 03:29:26.823618 2812 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.117:6443: connect: connection refused Dec 16 03:29:26.823802 kubelet[2812]: E1216 03:29:26.823779 2812 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.117:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:29:26.824164 kubelet[2812]: E1216 03:29:26.824127 2812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-117?timeout=10s\": dial tcp 172.31.30.117:6443: connect: connection refused" interval="200ms" Dec 16 03:29:26.829053 kubelet[2812]: I1216 03:29:26.828775 2812 factory.go:221] Registration of the systemd container factory successfully Dec 16 03:29:26.829053 kubelet[2812]: I1216 03:29:26.828886 2812 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:29:26.832000 audit[2824]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2824 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:26.832000 audit[2824]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd8b8f0e60 a2=0 a3=0 items=0 ppid=2812 pid=2824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:26.832000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:29:26.834000 audit[2825]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2825 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:26.834000 audit[2825]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7d0576f0 a2=0 a3=0 items=0 ppid=2812 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:26.834000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:29:26.837000 audit[2827]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2827 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:26.837000 audit[2827]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdfb3e16e0 a2=0 a3=0 items=0 ppid=2812 pid=2827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:26.837000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:29:26.841532 kubelet[2812]: E1216 03:29:26.840501 2812 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:29:26.841532 kubelet[2812]: I1216 03:29:26.840770 2812 factory.go:221] Registration of the containerd container factory successfully Dec 16 03:29:26.841000 audit[2829]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2829 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:26.841000 audit[2829]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdda417c20 a2=0 a3=0 items=0 ppid=2812 pid=2829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:26.841000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:29:26.851000 audit[2832]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2832 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:26.851000 audit[2832]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffff804deb0 a2=0 a3=0 items=0 ppid=2812 pid=2832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:26.851000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 03:29:26.854741 
kubelet[2812]: I1216 03:29:26.854350 2812 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 03:29:26.854000 audit[2833]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2833 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:26.854000 audit[2833]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffccc07b9e0 a2=0 a3=0 items=0 ppid=2812 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:26.854000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:29:26.857222 kubelet[2812]: I1216 03:29:26.856650 2812 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 03:29:26.857222 kubelet[2812]: I1216 03:29:26.856683 2812 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 03:29:26.857222 kubelet[2812]: I1216 03:29:26.856711 2812 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 03:29:26.857222 kubelet[2812]: I1216 03:29:26.856725 2812 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 03:29:26.857222 kubelet[2812]: E1216 03:29:26.856790 2812 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:29:26.856000 audit[2834]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2834 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:26.856000 audit[2834]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8af7ef00 a2=0 a3=0 items=0 ppid=2812 pid=2834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:26.856000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:29:26.858000 audit[2835]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2835 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:26.858000 audit[2835]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb5c62d70 a2=0 a3=0 items=0 ppid=2812 pid=2835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:26.858000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:29:26.860000 audit[2836]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2836 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:26.860000 audit[2836]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8edb4c90 a2=0 a3=0 items=0 ppid=2812 pid=2836 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:26.860000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:29:26.862000 audit[2837]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2837 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:26.862000 audit[2837]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd4c986ab0 a2=0 a3=0 items=0 ppid=2812 pid=2837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:26.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:29:26.864000 audit[2839]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2839 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:26.864000 audit[2839]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2bb9ca30 a2=0 a3=0 items=0 ppid=2812 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:26.864000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:29:26.865000 audit[2840]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2840 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:26.865000 audit[2840]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7ea52640 a2=0 a3=0 items=0 
ppid=2812 pid=2840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:26.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:29:26.867703 kubelet[2812]: W1216 03:29:26.867653 2812 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.117:6443: connect: connection refused Dec 16 03:29:26.867801 kubelet[2812]: E1216 03:29:26.867716 2812 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.117:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:29:26.877622 kubelet[2812]: I1216 03:29:26.877582 2812 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:29:26.877784 kubelet[2812]: I1216 03:29:26.877764 2812 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:29:26.877853 kubelet[2812]: I1216 03:29:26.877791 2812 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:29:26.883944 kubelet[2812]: I1216 03:29:26.883829 2812 policy_none.go:49] "None policy: Start" Dec 16 03:29:26.883944 kubelet[2812]: I1216 03:29:26.883863 2812 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:29:26.883944 kubelet[2812]: I1216 03:29:26.883881 2812 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:29:26.894734 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 16 03:29:26.906045 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 03:29:26.910952 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 03:29:26.921103 kubelet[2812]: I1216 03:29:26.921040 2812 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 03:29:26.922749 kubelet[2812]: I1216 03:29:26.922388 2812 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:29:26.922749 kubelet[2812]: I1216 03:29:26.922423 2812 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:29:26.922749 kubelet[2812]: I1216 03:29:26.922701 2812 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:29:26.926623 kubelet[2812]: E1216 03:29:26.926365 2812 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 03:29:26.926623 kubelet[2812]: E1216 03:29:26.926427 2812 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-117\" not found" Dec 16 03:29:26.990356 systemd[1]: Created slice kubepods-burstable-pod7b7e859cf9ad71fada470d7fc1331126.slice - libcontainer container kubepods-burstable-pod7b7e859cf9ad71fada470d7fc1331126.slice. Dec 16 03:29:27.003750 kubelet[2812]: E1216 03:29:27.003692 2812 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-117\" not found" node="ip-172-31-30-117" Dec 16 03:29:27.008064 systemd[1]: Created slice kubepods-burstable-pod45bc326728e90a4a1c307c08e1e7a5f0.slice - libcontainer container kubepods-burstable-pod45bc326728e90a4a1c307c08e1e7a5f0.slice. 
Dec 16 03:29:27.027005 kubelet[2812]: E1216 03:29:27.025524 2812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-117?timeout=10s\": dial tcp 172.31.30.117:6443: connect: connection refused" interval="400ms" Dec 16 03:29:27.027005 kubelet[2812]: I1216 03:29:27.025602 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b7e859cf9ad71fada470d7fc1331126-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-117\" (UID: \"7b7e859cf9ad71fada470d7fc1331126\") " pod="kube-system/kube-controller-manager-ip-172-31-30-117" Dec 16 03:29:27.027005 kubelet[2812]: I1216 03:29:27.025622 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b7e859cf9ad71fada470d7fc1331126-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-117\" (UID: \"7b7e859cf9ad71fada470d7fc1331126\") " pod="kube-system/kube-controller-manager-ip-172-31-30-117" Dec 16 03:29:27.027005 kubelet[2812]: I1216 03:29:27.025638 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/201dd86aa37c96035a2e577e09337b19-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-117\" (UID: \"201dd86aa37c96035a2e577e09337b19\") " pod="kube-system/kube-scheduler-ip-172-31-30-117" Dec 16 03:29:27.027005 kubelet[2812]: I1216 03:29:27.025652 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b7e859cf9ad71fada470d7fc1331126-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-117\" (UID: \"7b7e859cf9ad71fada470d7fc1331126\") " pod="kube-system/kube-controller-manager-ip-172-31-30-117" Dec 16 
03:29:27.027311 kubelet[2812]: I1216 03:29:27.025669 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b7e859cf9ad71fada470d7fc1331126-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-117\" (UID: \"7b7e859cf9ad71fada470d7fc1331126\") " pod="kube-system/kube-controller-manager-ip-172-31-30-117" Dec 16 03:29:27.027311 kubelet[2812]: I1216 03:29:27.025687 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b7e859cf9ad71fada470d7fc1331126-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-117\" (UID: \"7b7e859cf9ad71fada470d7fc1331126\") " pod="kube-system/kube-controller-manager-ip-172-31-30-117" Dec 16 03:29:27.027311 kubelet[2812]: I1216 03:29:27.025701 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45bc326728e90a4a1c307c08e1e7a5f0-ca-certs\") pod \"kube-apiserver-ip-172-31-30-117\" (UID: \"45bc326728e90a4a1c307c08e1e7a5f0\") " pod="kube-system/kube-apiserver-ip-172-31-30-117" Dec 16 03:29:27.027311 kubelet[2812]: I1216 03:29:27.025714 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45bc326728e90a4a1c307c08e1e7a5f0-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-117\" (UID: \"45bc326728e90a4a1c307c08e1e7a5f0\") " pod="kube-system/kube-apiserver-ip-172-31-30-117" Dec 16 03:29:27.027311 kubelet[2812]: I1216 03:29:27.025736 2812 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45bc326728e90a4a1c307c08e1e7a5f0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-117\" (UID: 
\"45bc326728e90a4a1c307c08e1e7a5f0\") " pod="kube-system/kube-apiserver-ip-172-31-30-117" Dec 16 03:29:27.028277 kubelet[2812]: I1216 03:29:27.027870 2812 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-117" Dec 16 03:29:27.029313 kubelet[2812]: E1216 03:29:27.028824 2812 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.117:6443/api/v1/nodes\": dial tcp 172.31.30.117:6443: connect: connection refused" node="ip-172-31-30-117" Dec 16 03:29:27.030197 kubelet[2812]: E1216 03:29:27.029917 2812 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-117\" not found" node="ip-172-31-30-117" Dec 16 03:29:27.033594 systemd[1]: Created slice kubepods-burstable-pod201dd86aa37c96035a2e577e09337b19.slice - libcontainer container kubepods-burstable-pod201dd86aa37c96035a2e577e09337b19.slice. Dec 16 03:29:27.036063 kubelet[2812]: E1216 03:29:27.036030 2812 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-117\" not found" node="ip-172-31-30-117" Dec 16 03:29:27.050623 kubelet[2812]: E1216 03:29:27.050520 2812 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.117:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.117:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-117.18819476c89fcf5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-117,UID:ip-172-31-30-117,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-117,},FirstTimestamp:2025-12-16 03:29:26.798913372 +0000 UTC m=+0.642172051,LastTimestamp:2025-12-16 03:29:26.798913372 +0000 UTC m=+0.642172051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-117,}" Dec 16 03:29:27.231698 kubelet[2812]: I1216 03:29:27.231669 2812 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-117" Dec 16 03:29:27.232302 kubelet[2812]: E1216 03:29:27.232055 2812 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.117:6443/api/v1/nodes\": dial tcp 172.31.30.117:6443: connect: connection refused" node="ip-172-31-30-117" Dec 16 03:29:27.309252 containerd[1851]: time="2025-12-16T03:29:27.309205303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-117,Uid:7b7e859cf9ad71fada470d7fc1331126,Namespace:kube-system,Attempt:0,}" Dec 16 03:29:27.331385 containerd[1851]: time="2025-12-16T03:29:27.331337668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-117,Uid:45bc326728e90a4a1c307c08e1e7a5f0,Namespace:kube-system,Attempt:0,}" Dec 16 03:29:27.337717 containerd[1851]: time="2025-12-16T03:29:27.337621580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-117,Uid:201dd86aa37c96035a2e577e09337b19,Namespace:kube-system,Attempt:0,}" Dec 16 03:29:27.426437 kubelet[2812]: E1216 03:29:27.426391 2812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-117?timeout=10s\": dial tcp 172.31.30.117:6443: connect: connection refused" interval="800ms" Dec 16 03:29:27.465197 containerd[1851]: time="2025-12-16T03:29:27.464161023Z" level=info msg="connecting to shim 3aa4d072a8236059de9b2d205b775df5bd3cc739dcd7e7e4e30b64689586054b" address="unix:///run/containerd/s/dafce5b9793a50a34e44a4b8a2b625ec389d38fd61b2750118c5f25d4bcabb80" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:29:27.475399 containerd[1851]: time="2025-12-16T03:29:27.475350244Z" 
level=info msg="connecting to shim 5e02ac517055bd7876da8b8a04ce0ffefdab2d35b887c11b2781867c92d2cdee" address="unix:///run/containerd/s/85392e2c4e110945446e0895f0647c3ebdd7f31ed71aeca4118f536fa72a3773" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:29:27.476244 containerd[1851]: time="2025-12-16T03:29:27.476210326Z" level=info msg="connecting to shim 4bb68c9a2a4d5431be045979c0ca127ed2a7f9bf3fcca716fff9e21590bc2820" address="unix:///run/containerd/s/03f9ab29871e0e9600dcdbe26a5c595610e1c45f7e4bfbd288a58958584777e0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:29:27.602466 systemd[1]: Started cri-containerd-3aa4d072a8236059de9b2d205b775df5bd3cc739dcd7e7e4e30b64689586054b.scope - libcontainer container 3aa4d072a8236059de9b2d205b775df5bd3cc739dcd7e7e4e30b64689586054b. Dec 16 03:29:27.606008 systemd[1]: Started cri-containerd-4bb68c9a2a4d5431be045979c0ca127ed2a7f9bf3fcca716fff9e21590bc2820.scope - libcontainer container 4bb68c9a2a4d5431be045979c0ca127ed2a7f9bf3fcca716fff9e21590bc2820. Dec 16 03:29:27.609630 systemd[1]: Started cri-containerd-5e02ac517055bd7876da8b8a04ce0ffefdab2d35b887c11b2781867c92d2cdee.scope - libcontainer container 5e02ac517055bd7876da8b8a04ce0ffefdab2d35b887c11b2781867c92d2cdee. 
Dec 16 03:29:27.636461 kubelet[2812]: I1216 03:29:27.636391 2812 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-117" Dec 16 03:29:27.635000 audit: BPF prog-id=93 op=LOAD Dec 16 03:29:27.637327 kubelet[2812]: E1216 03:29:27.637293 2812 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.117:6443/api/v1/nodes\": dial tcp 172.31.30.117:6443: connect: connection refused" node="ip-172-31-30-117" Dec 16 03:29:27.638000 audit: BPF prog-id=94 op=LOAD Dec 16 03:29:27.638000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2874 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303261633531373035356264373837366461386238613034636530 Dec 16 03:29:27.638000 audit: BPF prog-id=94 op=UNLOAD Dec 16 03:29:27.638000 audit[2901]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303261633531373035356264373837366461386238613034636530 Dec 16 03:29:27.643000 audit: BPF prog-id=95 op=LOAD Dec 16 03:29:27.644000 audit: BPF prog-id=96 op=LOAD Dec 16 03:29:27.644000 audit: BPF prog-id=97 op=LOAD Dec 16 
03:29:27.644000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2861 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613464303732613832333630353964653962326432303562373735 Dec 16 03:29:27.644000 audit: BPF prog-id=96 op=UNLOAD Dec 16 03:29:27.644000 audit[2896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613464303732613832333630353964653962326432303562373735 Dec 16 03:29:27.644000 audit: BPF prog-id=98 op=LOAD Dec 16 03:29:27.644000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2861 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613464303732613832333630353964653962326432303562373735 Dec 16 
03:29:27.644000 audit: BPF prog-id=99 op=LOAD Dec 16 03:29:27.644000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2861 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613464303732613832333630353964653962326432303562373735 Dec 16 03:29:27.644000 audit: BPF prog-id=99 op=UNLOAD Dec 16 03:29:27.644000 audit[2896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613464303732613832333630353964653962326432303562373735 Dec 16 03:29:27.644000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2874 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.644000 audit: BPF prog-id=98 op=UNLOAD Dec 16 03:29:27.644000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303261633531373035356264373837366461386238613034636530 Dec 16 03:29:27.644000 audit[2896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613464303732613832333630353964653962326432303562373735 Dec 16 03:29:27.645000 audit: BPF prog-id=100 op=LOAD Dec 16 03:29:27.645000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2861 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361613464303732613832333630353964653962326432303562373735 Dec 16 03:29:27.645000 audit: BPF prog-id=101 op=LOAD Dec 16 03:29:27.645000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2874 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.645000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303261633531373035356264373837366461386238613034636530 Dec 16 03:29:27.647000 audit: BPF prog-id=101 op=UNLOAD Dec 16 03:29:27.647000 audit[2901]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303261633531373035356264373837366461386238613034636530 Dec 16 03:29:27.647000 audit: BPF prog-id=97 op=UNLOAD Dec 16 03:29:27.647000 audit[2901]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303261633531373035356264373837366461386238613034636530 Dec 16 03:29:27.647000 audit: BPF prog-id=102 op=LOAD Dec 16 03:29:27.647000 audit[2901]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2874 pid=2901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:29:27.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303261633531373035356264373837366461386238613034636530 Dec 16 03:29:27.650016 kubelet[2812]: W1216 03:29:27.649855 2812 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.117:6443: connect: connection refused Dec 16 03:29:27.650016 kubelet[2812]: E1216 03:29:27.649900 2812 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.117:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:29:27.652000 audit: BPF prog-id=103 op=LOAD Dec 16 03:29:27.653000 audit: BPF prog-id=104 op=LOAD Dec 16 03:29:27.653000 audit[2905]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2877 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462623638633961326134643534333162653034353937396330636131 Dec 16 03:29:27.653000 audit: BPF prog-id=104 op=UNLOAD Dec 16 03:29:27.653000 audit[2905]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462623638633961326134643534333162653034353937396330636131 Dec 16 03:29:27.654000 audit: BPF prog-id=105 op=LOAD Dec 16 03:29:27.654000 audit[2905]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2877 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462623638633961326134643534333162653034353937396330636131 Dec 16 03:29:27.654000 audit: BPF prog-id=106 op=LOAD Dec 16 03:29:27.654000 audit[2905]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2877 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462623638633961326134643534333162653034353937396330636131 Dec 16 03:29:27.654000 audit: BPF prog-id=106 op=UNLOAD Dec 16 03:29:27.654000 audit[2905]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2905 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462623638633961326134643534333162653034353937396330636131 Dec 16 03:29:27.654000 audit: BPF prog-id=105 op=UNLOAD Dec 16 03:29:27.654000 audit[2905]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462623638633961326134643534333162653034353937396330636131 Dec 16 03:29:27.654000 audit: BPF prog-id=107 op=LOAD Dec 16 03:29:27.654000 audit[2905]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2877 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462623638633961326134643534333162653034353937396330636131 Dec 16 03:29:27.709781 containerd[1851]: time="2025-12-16T03:29:27.709527180Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-117,Uid:7b7e859cf9ad71fada470d7fc1331126,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bb68c9a2a4d5431be045979c0ca127ed2a7f9bf3fcca716fff9e21590bc2820\"" Dec 16 03:29:27.735601 containerd[1851]: time="2025-12-16T03:29:27.735541913Z" level=info msg="CreateContainer within sandbox \"4bb68c9a2a4d5431be045979c0ca127ed2a7f9bf3fcca716fff9e21590bc2820\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 03:29:27.737152 kubelet[2812]: W1216 03:29:27.736584 2812 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.117:6443: connect: connection refused Dec 16 03:29:27.737152 kubelet[2812]: E1216 03:29:27.736666 2812 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.117:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:29:27.763655 containerd[1851]: time="2025-12-16T03:29:27.763605979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-117,Uid:45bc326728e90a4a1c307c08e1e7a5f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3aa4d072a8236059de9b2d205b775df5bd3cc739dcd7e7e4e30b64689586054b\"" Dec 16 03:29:27.766852 containerd[1851]: time="2025-12-16T03:29:27.766801163Z" level=info msg="CreateContainer within sandbox \"3aa4d072a8236059de9b2d205b775df5bd3cc739dcd7e7e4e30b64689586054b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 03:29:27.767130 containerd[1851]: time="2025-12-16T03:29:27.766937488Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-117,Uid:201dd86aa37c96035a2e577e09337b19,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e02ac517055bd7876da8b8a04ce0ffefdab2d35b887c11b2781867c92d2cdee\"" Dec 16 03:29:27.770031 containerd[1851]: time="2025-12-16T03:29:27.769913510Z" level=info msg="CreateContainer within sandbox \"5e02ac517055bd7876da8b8a04ce0ffefdab2d35b887c11b2781867c92d2cdee\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 03:29:27.788566 containerd[1851]: time="2025-12-16T03:29:27.786862376Z" level=info msg="Container 3c85e94b6817c24fb53169144aa8695d1934d54cc7851c0e648a975c8c7cb1d7: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:29:27.788899 containerd[1851]: time="2025-12-16T03:29:27.788870278Z" level=info msg="Container 6a7a689d8f51387fa9685b0d5a92a352f852b54e4bfddb995216ec3c99a6fe6f: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:29:27.794302 containerd[1851]: time="2025-12-16T03:29:27.794254480Z" level=info msg="Container 8975df7704c92e3764f92d66ac0bb04bba34f7ec7c8e828b70435395fd5602c4: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:29:27.806306 containerd[1851]: time="2025-12-16T03:29:27.805767077Z" level=info msg="CreateContainer within sandbox \"4bb68c9a2a4d5431be045979c0ca127ed2a7f9bf3fcca716fff9e21590bc2820\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3c85e94b6817c24fb53169144aa8695d1934d54cc7851c0e648a975c8c7cb1d7\"" Dec 16 03:29:27.809070 containerd[1851]: time="2025-12-16T03:29:27.809016948Z" level=info msg="StartContainer for \"3c85e94b6817c24fb53169144aa8695d1934d54cc7851c0e648a975c8c7cb1d7\"" Dec 16 03:29:27.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:27.810650 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
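The audit `SYSCALL` records interleaved above are flat `key=value` pairs, with values either bare (`pid=2901`), double-quoted (`comm="runc"`), or parenthesized sentinels (`key=(null)`). A small parser (a sketch, not the authoritative auditd field grammar) turns one record body into a dict for filtering:

```python
import re

def parse_audit_fields(record: str) -> dict[str, str]:
    """Split an audit record body into a {field: value} dict.
    Handles bare, double-quoted, and (null)/(none) values; does not
    attempt auditd's full escaping rules."""
    fields: dict[str, str] = {}
    for key, quoted, bare in re.findall(r'(\w+)=(?:"([^"]*)"|(\S+))', record):
        # exactly one of the two alternation groups matched
        fields[key] = bare or quoted
    return fields

rec = ('arch=c000003e syscall=321 success=yes exit=21 a0=5 ppid=2877 pid=2905 '
       'comm="runc" exe="/usr/bin/runc" key=(null)')
f = parse_audit_fields(rec)
print(f["syscall"], f["comm"], f["exit"])  # 321 runc 21
```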
Dec 16 03:29:27.815969 containerd[1851]: time="2025-12-16T03:29:27.815877987Z" level=info msg="connecting to shim 3c85e94b6817c24fb53169144aa8695d1934d54cc7851c0e648a975c8c7cb1d7" address="unix:///run/containerd/s/03f9ab29871e0e9600dcdbe26a5c595610e1c45f7e4bfbd288a58958584777e0" protocol=ttrpc version=3 Dec 16 03:29:27.822815 containerd[1851]: time="2025-12-16T03:29:27.822764928Z" level=info msg="CreateContainer within sandbox \"3aa4d072a8236059de9b2d205b775df5bd3cc739dcd7e7e4e30b64689586054b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a7a689d8f51387fa9685b0d5a92a352f852b54e4bfddb995216ec3c99a6fe6f\"" Dec 16 03:29:27.825000 audit: BPF prog-id=83 op=UNLOAD Dec 16 03:29:27.826532 containerd[1851]: time="2025-12-16T03:29:27.826233746Z" level=info msg="StartContainer for \"6a7a689d8f51387fa9685b0d5a92a352f852b54e4bfddb995216ec3c99a6fe6f\"" Dec 16 03:29:27.833878 containerd[1851]: time="2025-12-16T03:29:27.833828457Z" level=info msg="CreateContainer within sandbox \"5e02ac517055bd7876da8b8a04ce0ffefdab2d35b887c11b2781867c92d2cdee\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8975df7704c92e3764f92d66ac0bb04bba34f7ec7c8e828b70435395fd5602c4\"" Dec 16 03:29:27.835702 containerd[1851]: time="2025-12-16T03:29:27.835649183Z" level=info msg="connecting to shim 6a7a689d8f51387fa9685b0d5a92a352f852b54e4bfddb995216ec3c99a6fe6f" address="unix:///run/containerd/s/dafce5b9793a50a34e44a4b8a2b625ec389d38fd61b2750118c5f25d4bcabb80" protocol=ttrpc version=3 Dec 16 03:29:27.839812 containerd[1851]: time="2025-12-16T03:29:27.839775454Z" level=info msg="StartContainer for \"8975df7704c92e3764f92d66ac0bb04bba34f7ec7c8e828b70435395fd5602c4\"" Dec 16 03:29:27.841883 containerd[1851]: time="2025-12-16T03:29:27.841845227Z" level=info msg="connecting to shim 8975df7704c92e3764f92d66ac0bb04bba34f7ec7c8e828b70435395fd5602c4" address="unix:///run/containerd/s/85392e2c4e110945446e0895f0647c3ebdd7f31ed71aeca4118f536fa72a3773" 
protocol=ttrpc version=3 Dec 16 03:29:27.847539 systemd[1]: Started cri-containerd-3c85e94b6817c24fb53169144aa8695d1934d54cc7851c0e648a975c8c7cb1d7.scope - libcontainer container 3c85e94b6817c24fb53169144aa8695d1934d54cc7851c0e648a975c8c7cb1d7. Dec 16 03:29:27.887518 systemd[1]: Started cri-containerd-6a7a689d8f51387fa9685b0d5a92a352f852b54e4bfddb995216ec3c99a6fe6f.scope - libcontainer container 6a7a689d8f51387fa9685b0d5a92a352f852b54e4bfddb995216ec3c99a6fe6f. Dec 16 03:29:27.894000 audit: BPF prog-id=108 op=LOAD Dec 16 03:29:27.895000 audit: BPF prog-id=109 op=LOAD Dec 16 03:29:27.895000 audit[2993]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=2877 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363383565393462363831376332346662353331363931343461613836 Dec 16 03:29:27.897000 audit: BPF prog-id=109 op=UNLOAD Dec 16 03:29:27.897000 audit[2993]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.897000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363383565393462363831376332346662353331363931343461613836 Dec 16 03:29:27.898000 audit: BPF prog-id=110 op=LOAD Dec 16 03:29:27.898000 audit[2993]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=2877 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.898000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363383565393462363831376332346662353331363931343461613836 Dec 16 03:29:27.899000 audit: BPF prog-id=111 op=LOAD Dec 16 03:29:27.899000 audit[2993]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=2877 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.899000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363383565393462363831376332346662353331363931343461613836 Dec 16 03:29:27.900000 audit: BPF prog-id=111 op=UNLOAD Dec 16 03:29:27.900000 audit[2993]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363383565393462363831376332346662353331363931343461613836 Dec 16 03:29:27.901000 audit: BPF prog-id=110 op=UNLOAD Dec 16 
03:29:27.901000 audit[2993]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363383565393462363831376332346662353331363931343461613836 Dec 16 03:29:27.901000 audit: BPF prog-id=112 op=LOAD Dec 16 03:29:27.901000 audit[2993]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=2877 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363383565393462363831376332346662353331363931343461613836 Dec 16 03:29:27.906559 systemd[1]: Started cri-containerd-8975df7704c92e3764f92d66ac0bb04bba34f7ec7c8e828b70435395fd5602c4.scope - libcontainer container 8975df7704c92e3764f92d66ac0bb04bba34f7ec7c8e828b70435395fd5602c4. 
Dec 16 03:29:27.929000 audit: BPF prog-id=113 op=LOAD Dec 16 03:29:27.930000 audit: BPF prog-id=114 op=LOAD Dec 16 03:29:27.930000 audit[3005]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2861 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661376136383964386635313338376661393638356230643561393261 Dec 16 03:29:27.930000 audit: BPF prog-id=114 op=UNLOAD Dec 16 03:29:27.930000 audit[3005]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661376136383964386635313338376661393638356230643561393261 Dec 16 03:29:27.930000 audit: BPF prog-id=115 op=LOAD Dec 16 03:29:27.930000 audit[3005]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2861 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.930000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661376136383964386635313338376661393638356230643561393261 Dec 16 03:29:27.930000 audit: BPF prog-id=116 op=LOAD Dec 16 03:29:27.930000 audit[3005]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2861 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661376136383964386635313338376661393638356230643561393261 Dec 16 03:29:27.930000 audit: BPF prog-id=116 op=UNLOAD Dec 16 03:29:27.930000 audit[3005]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661376136383964386635313338376661393638356230643561393261 Dec 16 03:29:27.930000 audit: BPF prog-id=115 op=UNLOAD Dec 16 03:29:27.930000 audit[3005]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2861 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:29:27.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661376136383964386635313338376661393638356230643561393261 Dec 16 03:29:27.930000 audit: BPF prog-id=117 op=LOAD Dec 16 03:29:27.930000 audit[3005]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2861 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661376136383964386635313338376661393638356230643561393261 Dec 16 03:29:27.962000 audit: BPF prog-id=118 op=LOAD Dec 16 03:29:27.965000 audit: BPF prog-id=119 op=LOAD Dec 16 03:29:27.965000 audit[3006]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2874 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839373564663737303463393265333736346639326436366163306262 Dec 16 03:29:27.965000 audit: BPF prog-id=119 op=UNLOAD Dec 16 03:29:27.965000 audit[3006]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839373564663737303463393265333736346639326436366163306262 Dec 16 03:29:27.967000 audit: BPF prog-id=120 op=LOAD Dec 16 03:29:27.967000 audit[3006]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2874 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839373564663737303463393265333736346639326436366163306262 Dec 16 03:29:27.967000 audit: BPF prog-id=121 op=LOAD Dec 16 03:29:27.967000 audit[3006]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2874 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839373564663737303463393265333736346639326436366163306262 Dec 16 03:29:27.967000 audit: BPF prog-id=121 op=UNLOAD Dec 16 03:29:27.967000 audit[3006]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839373564663737303463393265333736346639326436366163306262 Dec 16 03:29:27.967000 audit: BPF prog-id=120 op=UNLOAD Dec 16 03:29:27.967000 audit[3006]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839373564663737303463393265333736346639326436366163306262 Dec 16 03:29:27.967000 audit: BPF prog-id=122 op=LOAD Dec 16 03:29:27.967000 audit[3006]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2874 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:27.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839373564663737303463393265333736346639326436366163306262 Dec 16 03:29:28.007545 containerd[1851]: time="2025-12-16T03:29:28.007492690Z" level=info msg="StartContainer for \"3c85e94b6817c24fb53169144aa8695d1934d54cc7851c0e648a975c8c7cb1d7\" returns 
successfully" Dec 16 03:29:28.027887 containerd[1851]: time="2025-12-16T03:29:28.027777288Z" level=info msg="StartContainer for \"6a7a689d8f51387fa9685b0d5a92a352f852b54e4bfddb995216ec3c99a6fe6f\" returns successfully" Dec 16 03:29:28.043219 containerd[1851]: time="2025-12-16T03:29:28.043181612Z" level=info msg="StartContainer for \"8975df7704c92e3764f92d66ac0bb04bba34f7ec7c8e828b70435395fd5602c4\" returns successfully" Dec 16 03:29:28.227182 kubelet[2812]: E1216 03:29:28.227118 2812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-117?timeout=10s\": dial tcp 172.31.30.117:6443: connect: connection refused" interval="1.6s" Dec 16 03:29:28.265389 kubelet[2812]: W1216 03:29:28.265322 2812 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-117&limit=500&resourceVersion=0": dial tcp 172.31.30.117:6443: connect: connection refused Dec 16 03:29:28.265389 kubelet[2812]: E1216 03:29:28.265399 2812 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-117&limit=500&resourceVersion=0\": dial tcp 172.31.30.117:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:29:28.271143 kubelet[2812]: W1216 03:29:28.271081 2812 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.117:6443: connect: connection refused Dec 16 03:29:28.271143 kubelet[2812]: E1216 03:29:28.271154 2812 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to 
list *v1.RuntimeClass: Get \"https://172.31.30.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.117:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:29:28.439563 kubelet[2812]: I1216 03:29:28.439532 2812 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-117" Dec 16 03:29:28.441549 kubelet[2812]: E1216 03:29:28.441509 2812 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.117:6443/api/v1/nodes\": dial tcp 172.31.30.117:6443: connect: connection refused" node="ip-172-31-30-117" Dec 16 03:29:28.742592 kubelet[2812]: E1216 03:29:28.742543 2812 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.117:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:29:28.917724 kubelet[2812]: E1216 03:29:28.916714 2812 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-117\" not found" node="ip-172-31-30-117" Dec 16 03:29:28.922572 kubelet[2812]: E1216 03:29:28.921998 2812 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-117\" not found" node="ip-172-31-30-117" Dec 16 03:29:28.928935 kubelet[2812]: E1216 03:29:28.928908 2812 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-117\" not found" node="ip-172-31-30-117" Dec 16 03:29:29.930747 kubelet[2812]: E1216 03:29:29.930713 2812 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-117\" not found" node="ip-172-31-30-117" Dec 16 
03:29:29.932687 kubelet[2812]: E1216 03:29:29.931903 2812 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-117\" not found" node="ip-172-31-30-117" Dec 16 03:29:29.933340 kubelet[2812]: E1216 03:29:29.933313 2812 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-117\" not found" node="ip-172-31-30-117" Dec 16 03:29:30.045252 kubelet[2812]: I1216 03:29:30.045221 2812 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-117" Dec 16 03:29:30.933865 kubelet[2812]: E1216 03:29:30.933830 2812 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-117\" not found" node="ip-172-31-30-117" Dec 16 03:29:30.937218 kubelet[2812]: E1216 03:29:30.936045 2812 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-117\" not found" node="ip-172-31-30-117" Dec 16 03:29:30.937218 kubelet[2812]: E1216 03:29:30.937143 2812 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-117\" not found" node="ip-172-31-30-117" Dec 16 03:29:31.913007 kubelet[2812]: E1216 03:29:31.912965 2812 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-117\" not found" node="ip-172-31-30-117" Dec 16 03:29:31.935951 kubelet[2812]: E1216 03:29:31.935913 2812 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-117\" not found" node="ip-172-31-30-117" Dec 16 03:29:31.990120 kubelet[2812]: I1216 03:29:31.989795 2812 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-117" Dec 16 03:29:31.990120 kubelet[2812]: E1216 03:29:31.989840 2812 kubelet_node_status.go:548] "Error updating node status, 
will retry" err="error getting node \"ip-172-31-30-117\": node \"ip-172-31-30-117\" not found" Dec 16 03:29:32.025101 kubelet[2812]: I1216 03:29:32.025055 2812 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-117" Dec 16 03:29:32.040380 kubelet[2812]: E1216 03:29:32.040316 2812 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-30-117\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-30-117" Dec 16 03:29:32.040676 kubelet[2812]: I1216 03:29:32.040359 2812 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-117" Dec 16 03:29:32.042811 kubelet[2812]: E1216 03:29:32.042767 2812 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-117\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-30-117" Dec 16 03:29:32.043120 kubelet[2812]: I1216 03:29:32.042988 2812 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-117" Dec 16 03:29:32.045046 kubelet[2812]: E1216 03:29:32.045018 2812 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-117\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-117" Dec 16 03:29:32.780460 kubelet[2812]: I1216 03:29:32.780243 2812 apiserver.go:52] "Watching apiserver" Dec 16 03:29:32.824101 kubelet[2812]: I1216 03:29:32.824066 2812 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 03:29:34.118088 systemd[1]: Reload requested from client PID 3093 ('systemctl') (unit session-8.scope)... Dec 16 03:29:34.118111 systemd[1]: Reloading... Dec 16 03:29:34.262211 zram_generator::config[3143]: No configuration found. 
Dec 16 03:29:34.544228 systemd[1]: Reloading finished in 425 ms. Dec 16 03:29:34.591065 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:29:34.606245 kernel: kauditd_printk_skb: 212 callbacks suppressed Dec 16 03:29:34.606344 kernel: audit: type=1131 audit(1765855774.604:414): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:34.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:34.604892 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 03:29:34.605153 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:29:34.605246 systemd[1]: kubelet.service: Consumed 1.082s CPU time, 129.6M memory peak. Dec 16 03:29:34.610517 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 03:29:34.610000 audit: BPF prog-id=123 op=LOAD Dec 16 03:29:34.614207 kernel: audit: type=1334 audit(1765855774.610:415): prog-id=123 op=LOAD Dec 16 03:29:34.614281 kernel: audit: type=1334 audit(1765855774.610:416): prog-id=124 op=LOAD Dec 16 03:29:34.610000 audit: BPF prog-id=124 op=LOAD Dec 16 03:29:34.610000 audit: BPF prog-id=70 op=UNLOAD Dec 16 03:29:34.610000 audit: BPF prog-id=71 op=UNLOAD Dec 16 03:29:34.620832 kernel: audit: type=1334 audit(1765855774.610:417): prog-id=70 op=UNLOAD Dec 16 03:29:34.620899 kernel: audit: type=1334 audit(1765855774.610:418): prog-id=71 op=UNLOAD Dec 16 03:29:34.620921 kernel: audit: type=1334 audit(1765855774.613:419): prog-id=125 op=LOAD Dec 16 03:29:34.613000 audit: BPF prog-id=125 op=LOAD Dec 16 03:29:34.613000 audit: BPF prog-id=89 op=UNLOAD Dec 16 03:29:34.622860 kernel: audit: type=1334 audit(1765855774.613:420): prog-id=89 op=UNLOAD Dec 16 03:29:34.622941 kernel: audit: type=1334 audit(1765855774.615:421): prog-id=126 op=LOAD Dec 16 03:29:34.615000 audit: BPF prog-id=126 op=LOAD Dec 16 03:29:34.615000 audit: BPF prog-id=80 op=UNLOAD Dec 16 03:29:34.624258 kernel: audit: type=1334 audit(1765855774.615:422): prog-id=80 op=UNLOAD Dec 16 03:29:34.615000 audit: BPF prog-id=127 op=LOAD Dec 16 03:29:34.615000 audit: BPF prog-id=128 op=LOAD Dec 16 03:29:34.615000 audit: BPF prog-id=81 op=UNLOAD Dec 16 03:29:34.615000 audit: BPF prog-id=82 op=UNLOAD Dec 16 03:29:34.616000 audit: BPF prog-id=129 op=LOAD Dec 16 03:29:34.616000 audit: BPF prog-id=86 op=UNLOAD Dec 16 03:29:34.616000 audit: BPF prog-id=130 op=LOAD Dec 16 03:29:34.616000 audit: BPF prog-id=131 op=LOAD Dec 16 03:29:34.616000 audit: BPF prog-id=87 op=UNLOAD Dec 16 03:29:34.616000 audit: BPF prog-id=88 op=UNLOAD Dec 16 03:29:34.617000 audit: BPF prog-id=132 op=LOAD Dec 16 03:29:34.617000 audit: BPF prog-id=76 op=UNLOAD Dec 16 03:29:34.617000 audit: BPF prog-id=133 op=LOAD Dec 16 03:29:34.617000 audit: BPF prog-id=72 op=UNLOAD Dec 16 03:29:34.619000 audit: BPF 
prog-id=134 op=LOAD Dec 16 03:29:34.619000 audit: BPF prog-id=90 op=UNLOAD Dec 16 03:29:34.619000 audit: BPF prog-id=135 op=LOAD Dec 16 03:29:34.619000 audit: BPF prog-id=136 op=LOAD Dec 16 03:29:34.619000 audit: BPF prog-id=91 op=UNLOAD Dec 16 03:29:34.619000 audit: BPF prog-id=92 op=UNLOAD Dec 16 03:29:34.619000 audit: BPF prog-id=137 op=LOAD Dec 16 03:29:34.627204 kernel: audit: type=1334 audit(1765855774.615:423): prog-id=127 op=LOAD Dec 16 03:29:34.623000 audit: BPF prog-id=77 op=UNLOAD Dec 16 03:29:34.624000 audit: BPF prog-id=138 op=LOAD Dec 16 03:29:34.624000 audit: BPF prog-id=139 op=LOAD Dec 16 03:29:34.624000 audit: BPF prog-id=78 op=UNLOAD Dec 16 03:29:34.624000 audit: BPF prog-id=79 op=UNLOAD Dec 16 03:29:34.624000 audit: BPF prog-id=140 op=LOAD Dec 16 03:29:34.624000 audit: BPF prog-id=73 op=UNLOAD Dec 16 03:29:34.624000 audit: BPF prog-id=141 op=LOAD Dec 16 03:29:34.624000 audit: BPF prog-id=142 op=LOAD Dec 16 03:29:34.624000 audit: BPF prog-id=74 op=UNLOAD Dec 16 03:29:34.624000 audit: BPF prog-id=75 op=UNLOAD Dec 16 03:29:35.144074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:29:35.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:35.164798 (kubelet)[3199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:29:35.232310 kubelet[3199]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:29:35.232927 kubelet[3199]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Dec 16 03:29:35.232927 kubelet[3199]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:29:35.234803 kubelet[3199]: I1216 03:29:35.234681 3199 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:29:35.256233 kubelet[3199]: I1216 03:29:35.256187 3199 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 03:29:35.256233 kubelet[3199]: I1216 03:29:35.256239 3199 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:29:35.256792 kubelet[3199]: I1216 03:29:35.256764 3199 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 03:29:35.260784 kubelet[3199]: I1216 03:29:35.260742 3199 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 03:29:35.265583 kubelet[3199]: I1216 03:29:35.264595 3199 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:29:35.275448 kubelet[3199]: I1216 03:29:35.275389 3199 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:29:35.281025 kubelet[3199]: I1216 03:29:35.280974 3199 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 03:29:35.281437 kubelet[3199]: I1216 03:29:35.281359 3199 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:29:35.281763 kubelet[3199]: I1216 03:29:35.281405 3199 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-117","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:29:35.281903 kubelet[3199]: I1216 03:29:35.281781 3199 topology_manager.go:138] "Creating topology manager with none 
policy" Dec 16 03:29:35.281903 kubelet[3199]: I1216 03:29:35.281795 3199 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 03:29:35.281903 kubelet[3199]: I1216 03:29:35.281868 3199 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:29:35.282064 kubelet[3199]: I1216 03:29:35.282050 3199 kubelet.go:446] "Attempting to sync node with API server" Dec 16 03:29:35.282117 kubelet[3199]: I1216 03:29:35.282087 3199 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:29:35.282161 kubelet[3199]: I1216 03:29:35.282123 3199 kubelet.go:352] "Adding apiserver pod source" Dec 16 03:29:35.282161 kubelet[3199]: I1216 03:29:35.282138 3199 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:29:35.285968 kubelet[3199]: I1216 03:29:35.284905 3199 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:29:35.285968 kubelet[3199]: I1216 03:29:35.285545 3199 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 03:29:35.286438 kubelet[3199]: I1216 03:29:35.286421 3199 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:29:35.286566 kubelet[3199]: I1216 03:29:35.286557 3199 server.go:1287] "Started kubelet" Dec 16 03:29:35.295321 kubelet[3199]: I1216 03:29:35.295219 3199 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:29:35.311535 kubelet[3199]: I1216 03:29:35.311504 3199 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:29:35.311683 kubelet[3199]: I1216 03:29:35.304483 3199 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:29:35.313559 kubelet[3199]: E1216 03:29:35.313531 3199 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:29:35.313858 kubelet[3199]: I1216 03:29:35.313846 3199 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:29:35.314016 kubelet[3199]: I1216 03:29:35.294940 3199 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:29:35.315323 kubelet[3199]: I1216 03:29:35.315302 3199 server.go:479] "Adding debug handlers to kubelet server" Dec 16 03:29:35.317693 kubelet[3199]: E1216 03:29:35.317665 3199 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-117\" not found" Dec 16 03:29:35.319625 kubelet[3199]: I1216 03:29:35.304684 3199 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:29:35.323762 kubelet[3199]: I1216 03:29:35.321367 3199 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:29:35.323907 kubelet[3199]: I1216 03:29:35.321500 3199 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:29:35.324015 kubelet[3199]: I1216 03:29:35.323430 3199 factory.go:221] Registration of the systemd container factory successfully Dec 16 03:29:35.324294 kubelet[3199]: I1216 03:29:35.324256 3199 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:29:35.330140 kubelet[3199]: I1216 03:29:35.330100 3199 factory.go:221] Registration of the containerd container factory successfully Dec 16 03:29:35.370115 kubelet[3199]: I1216 03:29:35.369932 3199 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 03:29:35.376883 kubelet[3199]: I1216 03:29:35.376848 3199 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 16 03:29:35.378373 kubelet[3199]: I1216 03:29:35.377588 3199 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 03:29:35.378373 kubelet[3199]: I1216 03:29:35.377621 3199 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 03:29:35.378373 kubelet[3199]: I1216 03:29:35.377630 3199 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 03:29:35.378373 kubelet[3199]: E1216 03:29:35.377687 3199 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:29:35.453135 kubelet[3199]: I1216 03:29:35.452628 3199 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:29:35.453135 kubelet[3199]: I1216 03:29:35.452651 3199 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:29:35.453135 kubelet[3199]: I1216 03:29:35.452672 3199 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:29:35.453135 kubelet[3199]: I1216 03:29:35.452854 3199 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 03:29:35.453135 kubelet[3199]: I1216 03:29:35.452865 3199 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 03:29:35.453135 kubelet[3199]: I1216 03:29:35.452883 3199 policy_none.go:49] "None policy: Start" Dec 16 03:29:35.453135 kubelet[3199]: I1216 03:29:35.452897 3199 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:29:35.453135 kubelet[3199]: I1216 03:29:35.452910 3199 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:29:35.453135 kubelet[3199]: I1216 03:29:35.453040 3199 state_mem.go:75] "Updated machine memory state" Dec 16 03:29:35.458605 kubelet[3199]: I1216 03:29:35.458570 3199 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 03:29:35.458793 kubelet[3199]: I1216 
03:29:35.458772 3199 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:29:35.458856 kubelet[3199]: I1216 03:29:35.458794 3199 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:29:35.459799 kubelet[3199]: I1216 03:29:35.459669 3199 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:29:35.468473 kubelet[3199]: E1216 03:29:35.468438 3199 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 03:29:35.479946 kubelet[3199]: I1216 03:29:35.478597 3199 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-117" Dec 16 03:29:35.491864 kubelet[3199]: I1216 03:29:35.491809 3199 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-117" Dec 16 03:29:35.493439 kubelet[3199]: I1216 03:29:35.492444 3199 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-117" Dec 16 03:29:35.530280 kubelet[3199]: I1216 03:29:35.529976 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45bc326728e90a4a1c307c08e1e7a5f0-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-117\" (UID: \"45bc326728e90a4a1c307c08e1e7a5f0\") " pod="kube-system/kube-apiserver-ip-172-31-30-117" Dec 16 03:29:35.530280 kubelet[3199]: I1216 03:29:35.530074 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45bc326728e90a4a1c307c08e1e7a5f0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-117\" (UID: \"45bc326728e90a4a1c307c08e1e7a5f0\") " pod="kube-system/kube-apiserver-ip-172-31-30-117" Dec 16 03:29:35.530280 kubelet[3199]: I1216 
03:29:35.530094 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b7e859cf9ad71fada470d7fc1331126-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-117\" (UID: \"7b7e859cf9ad71fada470d7fc1331126\") " pod="kube-system/kube-controller-manager-ip-172-31-30-117" Dec 16 03:29:35.530469 kubelet[3199]: I1216 03:29:35.530292 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b7e859cf9ad71fada470d7fc1331126-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-117\" (UID: \"7b7e859cf9ad71fada470d7fc1331126\") " pod="kube-system/kube-controller-manager-ip-172-31-30-117" Dec 16 03:29:35.530469 kubelet[3199]: I1216 03:29:35.530379 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b7e859cf9ad71fada470d7fc1331126-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-117\" (UID: \"7b7e859cf9ad71fada470d7fc1331126\") " pod="kube-system/kube-controller-manager-ip-172-31-30-117" Dec 16 03:29:35.530469 kubelet[3199]: I1216 03:29:35.530435 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45bc326728e90a4a1c307c08e1e7a5f0-ca-certs\") pod \"kube-apiserver-ip-172-31-30-117\" (UID: \"45bc326728e90a4a1c307c08e1e7a5f0\") " pod="kube-system/kube-apiserver-ip-172-31-30-117" Dec 16 03:29:35.530469 kubelet[3199]: I1216 03:29:35.530458 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b7e859cf9ad71fada470d7fc1331126-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-117\" (UID: \"7b7e859cf9ad71fada470d7fc1331126\") " 
pod="kube-system/kube-controller-manager-ip-172-31-30-117" Dec 16 03:29:35.530574 kubelet[3199]: I1216 03:29:35.530515 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b7e859cf9ad71fada470d7fc1331126-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-117\" (UID: \"7b7e859cf9ad71fada470d7fc1331126\") " pod="kube-system/kube-controller-manager-ip-172-31-30-117" Dec 16 03:29:35.530574 kubelet[3199]: I1216 03:29:35.530531 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/201dd86aa37c96035a2e577e09337b19-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-117\" (UID: \"201dd86aa37c96035a2e577e09337b19\") " pod="kube-system/kube-scheduler-ip-172-31-30-117" Dec 16 03:29:35.573915 kubelet[3199]: I1216 03:29:35.573883 3199 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-117" Dec 16 03:29:35.595283 kubelet[3199]: I1216 03:29:35.594477 3199 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-30-117" Dec 16 03:29:35.595283 kubelet[3199]: I1216 03:29:35.594581 3199 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-117" Dec 16 03:29:36.297143 kubelet[3199]: I1216 03:29:36.297011 3199 apiserver.go:52] "Watching apiserver" Dec 16 03:29:36.324721 kubelet[3199]: I1216 03:29:36.324675 3199 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 03:29:36.432546 kubelet[3199]: I1216 03:29:36.432370 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-117" podStartSLOduration=1.432218267 podStartE2EDuration="1.432218267s" podCreationTimestamp="2025-12-16 03:29:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-16 03:29:36.417518543 +0000 UTC m=+1.246453871" watchObservedRunningTime="2025-12-16 03:29:36.432218267 +0000 UTC m=+1.261153591" Dec 16 03:29:36.444751 kubelet[3199]: I1216 03:29:36.443823 3199 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-117" Dec 16 03:29:36.444972 kubelet[3199]: I1216 03:29:36.444951 3199 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-117" Dec 16 03:29:36.459961 kubelet[3199]: I1216 03:29:36.459899 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-117" podStartSLOduration=1.459874194 podStartE2EDuration="1.459874194s" podCreationTimestamp="2025-12-16 03:29:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:29:36.433608736 +0000 UTC m=+1.262544066" watchObservedRunningTime="2025-12-16 03:29:36.459874194 +0000 UTC m=+1.288809525" Dec 16 03:29:36.462202 kubelet[3199]: E1216 03:29:36.461846 3199 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-117\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-117" Dec 16 03:29:36.463522 kubelet[3199]: E1216 03:29:36.463490 3199 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-117\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-117" Dec 16 03:29:36.490763 kubelet[3199]: I1216 03:29:36.490594 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-117" podStartSLOduration=1.490549061 podStartE2EDuration="1.490549061s" podCreationTimestamp="2025-12-16 03:29:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:29:36.462471292 +0000 UTC 
m=+1.291406620" watchObservedRunningTime="2025-12-16 03:29:36.490549061 +0000 UTC m=+1.319484389" Dec 16 03:29:39.598378 kubelet[3199]: I1216 03:29:39.598342 3199 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 03:29:39.599694 containerd[1851]: time="2025-12-16T03:29:39.599661511Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 03:29:39.600045 kubelet[3199]: I1216 03:29:39.599842 3199 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 03:29:40.284883 systemd[1]: Created slice kubepods-besteffort-pod9cad8f8d_e77c_4139_902e_3add300c345c.slice - libcontainer container kubepods-besteffort-pod9cad8f8d_e77c_4139_902e_3add300c345c.slice. Dec 16 03:29:40.360554 kubelet[3199]: I1216 03:29:40.360505 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9cad8f8d-e77c-4139-902e-3add300c345c-kube-proxy\") pod \"kube-proxy-bx4rz\" (UID: \"9cad8f8d-e77c-4139-902e-3add300c345c\") " pod="kube-system/kube-proxy-bx4rz" Dec 16 03:29:40.360554 kubelet[3199]: I1216 03:29:40.360567 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkpqf\" (UniqueName: \"kubernetes.io/projected/9cad8f8d-e77c-4139-902e-3add300c345c-kube-api-access-nkpqf\") pod \"kube-proxy-bx4rz\" (UID: \"9cad8f8d-e77c-4139-902e-3add300c345c\") " pod="kube-system/kube-proxy-bx4rz" Dec 16 03:29:40.360826 kubelet[3199]: I1216 03:29:40.360595 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9cad8f8d-e77c-4139-902e-3add300c345c-lib-modules\") pod \"kube-proxy-bx4rz\" (UID: \"9cad8f8d-e77c-4139-902e-3add300c345c\") " pod="kube-system/kube-proxy-bx4rz" Dec 16 03:29:40.360826 kubelet[3199]: 
I1216 03:29:40.360625 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9cad8f8d-e77c-4139-902e-3add300c345c-xtables-lock\") pod \"kube-proxy-bx4rz\" (UID: \"9cad8f8d-e77c-4139-902e-3add300c345c\") " pod="kube-system/kube-proxy-bx4rz" Dec 16 03:29:40.596111 containerd[1851]: time="2025-12-16T03:29:40.595957487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bx4rz,Uid:9cad8f8d-e77c-4139-902e-3add300c345c,Namespace:kube-system,Attempt:0,}" Dec 16 03:29:40.638712 containerd[1851]: time="2025-12-16T03:29:40.638647739Z" level=info msg="connecting to shim b555b2669e799014d6a551ee0cf8cf07f062c81ace6831025af0f13ef3d9c48e" address="unix:///run/containerd/s/5dbe7a12e94ede45881daa60c001a09e653d3414978029db00d9788fc009b65a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:29:40.697737 systemd[1]: Started cri-containerd-b555b2669e799014d6a551ee0cf8cf07f062c81ace6831025af0f13ef3d9c48e.scope - libcontainer container b555b2669e799014d6a551ee0cf8cf07f062c81ace6831025af0f13ef3d9c48e. Dec 16 03:29:40.713754 systemd[1]: Created slice kubepods-besteffort-pod5e4b73a4_aabb_49da_b722_a97df77eacc0.slice - libcontainer container kubepods-besteffort-pod5e4b73a4_aabb_49da_b722_a97df77eacc0.slice. 
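The slice names systemd creates above (e.g. `kubepods-besteffort-pod9cad8f8d_e77c_4139_902e_3add300c345c.slice`) follow directly from the kubelet config logged earlier (`CgroupDriver":"systemd"`): the pod's UID has its dashes replaced with underscores and is embedded in the QoS-class slice name. A minimal sketch of that mapping, assuming a BestEffort pod as in this log (the helper name `besteffort_slice` is mine, not from any Kubernetes API):

```python
def besteffort_slice(pod_uid: str) -> str:
    """Sketch: map a pod UID to the systemd slice name the kubelet uses
    for a BestEffort pod when the cgroup driver is "systemd".
    Dashes in the UID become underscores inside the unit name."""
    return "kubepods-besteffort-pod" + pod_uid.replace("-", "_") + ".slice"

# UID of the kube-proxy pod from the log above.
print(besteffort_slice("9cad8f8d-e77c-4139-902e-3add300c345c"))
# kubepods-besteffort-pod9cad8f8d_e77c_4139_902e_3add300c345c.slice
```

The tigera-operator pod's slice (`...pod5e4b73a4_aabb_49da_b722_a97df77eacc0.slice`) follows the same pattern.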
Dec 16 03:29:40.732374 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 03:29:40.732515 kernel: audit: type=1334 audit(1765855780.728:456): prog-id=143 op=LOAD Dec 16 03:29:40.728000 audit: BPF prog-id=143 op=LOAD Dec 16 03:29:40.731000 audit: BPF prog-id=144 op=LOAD Dec 16 03:29:40.739947 kernel: audit: type=1334 audit(1765855780.731:457): prog-id=144 op=LOAD Dec 16 03:29:40.740070 kernel: audit: type=1300 audit(1765855780.731:457): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3252 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:40.731000 audit[3264]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3252 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:40.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353562323636396537393930313464366135353165653063663863 Dec 16 03:29:40.747100 kernel: audit: type=1327 audit(1765855780.731:457): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353562323636396537393930313464366135353165653063663863 Dec 16 03:29:40.747236 kernel: audit: type=1334 audit(1765855780.731:458): prog-id=144 op=UNLOAD Dec 16 03:29:40.731000 audit: BPF prog-id=144 op=UNLOAD Dec 16 03:29:40.731000 audit[3264]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3264 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:40.753358 kernel: audit: type=1300 audit(1765855780.731:458): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:40.767610 kernel: audit: type=1327 audit(1765855780.731:458): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353562323636396537393930313464366135353165653063663863 Dec 16 03:29:40.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353562323636396537393930313464366135353165653063663863 Dec 16 03:29:40.767789 kubelet[3199]: I1216 03:29:40.766069 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpqzx\" (UniqueName: \"kubernetes.io/projected/5e4b73a4-aabb-49da-b722-a97df77eacc0-kube-api-access-lpqzx\") pod \"tigera-operator-7dcd859c48-hl7hh\" (UID: \"5e4b73a4-aabb-49da-b722-a97df77eacc0\") " pod="tigera-operator/tigera-operator-7dcd859c48-hl7hh" Dec 16 03:29:40.767789 kubelet[3199]: I1216 03:29:40.766150 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5e4b73a4-aabb-49da-b722-a97df77eacc0-var-lib-calico\") pod \"tigera-operator-7dcd859c48-hl7hh\" (UID: \"5e4b73a4-aabb-49da-b722-a97df77eacc0\") " 
pod="tigera-operator/tigera-operator-7dcd859c48-hl7hh" Dec 16 03:29:40.781204 kernel: audit: type=1334 audit(1765855780.731:459): prog-id=145 op=LOAD Dec 16 03:29:40.781339 kernel: audit: type=1300 audit(1765855780.731:459): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3252 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:40.731000 audit: BPF prog-id=145 op=LOAD Dec 16 03:29:40.731000 audit[3264]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3252 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:40.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353562323636396537393930313464366135353165653063663863 Dec 16 03:29:40.787425 kernel: audit: type=1327 audit(1765855780.731:459): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353562323636396537393930313464366135353165653063663863 Dec 16 03:29:40.731000 audit: BPF prog-id=146 op=LOAD Dec 16 03:29:40.731000 audit[3264]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3252 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:40.731000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353562323636396537393930313464366135353165653063663863 Dec 16 03:29:40.731000 audit: BPF prog-id=146 op=UNLOAD Dec 16 03:29:40.731000 audit[3264]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:40.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353562323636396537393930313464366135353165653063663863 Dec 16 03:29:40.731000 audit: BPF prog-id=145 op=UNLOAD Dec 16 03:29:40.731000 audit[3264]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:40.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353562323636396537393930313464366135353165653063663863 Dec 16 03:29:40.731000 audit: BPF prog-id=147 op=LOAD Dec 16 03:29:40.731000 audit[3264]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3252 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:29:40.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235353562323636396537393930313464366135353165653063663863 Dec 16 03:29:40.791110 containerd[1851]: time="2025-12-16T03:29:40.791056429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bx4rz,Uid:9cad8f8d-e77c-4139-902e-3add300c345c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b555b2669e799014d6a551ee0cf8cf07f062c81ace6831025af0f13ef3d9c48e\"" Dec 16 03:29:40.796421 containerd[1851]: time="2025-12-16T03:29:40.795993211Z" level=info msg="CreateContainer within sandbox \"b555b2669e799014d6a551ee0cf8cf07f062c81ace6831025af0f13ef3d9c48e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 03:29:40.825228 containerd[1851]: time="2025-12-16T03:29:40.824402086Z" level=info msg="Container b526eea3a05c0aa19c213377d1bf5780ae3ad506f3698132e3ba0759f98b9cd8: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:29:40.834027 containerd[1851]: time="2025-12-16T03:29:40.833988736Z" level=info msg="CreateContainer within sandbox \"b555b2669e799014d6a551ee0cf8cf07f062c81ace6831025af0f13ef3d9c48e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b526eea3a05c0aa19c213377d1bf5780ae3ad506f3698132e3ba0759f98b9cd8\"" Dec 16 03:29:40.835215 containerd[1851]: time="2025-12-16T03:29:40.835155110Z" level=info msg="StartContainer for \"b526eea3a05c0aa19c213377d1bf5780ae3ad506f3698132e3ba0759f98b9cd8\"" Dec 16 03:29:40.837198 containerd[1851]: time="2025-12-16T03:29:40.837151800Z" level=info msg="connecting to shim b526eea3a05c0aa19c213377d1bf5780ae3ad506f3698132e3ba0759f98b9cd8" address="unix:///run/containerd/s/5dbe7a12e94ede45881daa60c001a09e653d3414978029db00d9788fc009b65a" protocol=ttrpc version=3 Dec 16 03:29:40.861448 systemd[1]: Started 
cri-containerd-b526eea3a05c0aa19c213377d1bf5780ae3ad506f3698132e3ba0759f98b9cd8.scope - libcontainer container b526eea3a05c0aa19c213377d1bf5780ae3ad506f3698132e3ba0759f98b9cd8. Dec 16 03:29:40.928000 audit: BPF prog-id=148 op=LOAD Dec 16 03:29:40.928000 audit[3288]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3252 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:40.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323665656133613035633061613139633231333337376431626635 Dec 16 03:29:40.928000 audit: BPF prog-id=149 op=LOAD Dec 16 03:29:40.928000 audit[3288]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3252 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:40.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323665656133613035633061613139633231333337376431626635 Dec 16 03:29:40.928000 audit: BPF prog-id=149 op=UNLOAD Dec 16 03:29:40.928000 audit[3288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:40.928000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323665656133613035633061613139633231333337376431626635 Dec 16 03:29:40.928000 audit: BPF prog-id=148 op=UNLOAD Dec 16 03:29:40.928000 audit[3288]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3252 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:40.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323665656133613035633061613139633231333337376431626635 Dec 16 03:29:40.928000 audit: BPF prog-id=150 op=LOAD Dec 16 03:29:40.928000 audit[3288]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3252 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:40.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235323665656133613035633061613139633231333337376431626635 Dec 16 03:29:40.952158 containerd[1851]: time="2025-12-16T03:29:40.952118497Z" level=info msg="StartContainer for \"b526eea3a05c0aa19c213377d1bf5780ae3ad506f3698132e3ba0759f98b9cd8\" returns successfully" Dec 16 03:29:41.019703 containerd[1851]: time="2025-12-16T03:29:41.019658672Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hl7hh,Uid:5e4b73a4-aabb-49da-b722-a97df77eacc0,Namespace:tigera-operator,Attempt:0,}" Dec 16 03:29:41.051633 containerd[1851]: time="2025-12-16T03:29:41.051312089Z" level=info msg="connecting to shim c42cf9093ecae5d8447be45b33b1a53d014ce2821f085946f774d851dad93e85" address="unix:///run/containerd/s/38a2b3ab94e5efe1b7a0d0590c4796fc4f2e50cf518df4ed65a0df00a964ba52" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:29:41.080925 systemd[1]: Started cri-containerd-c42cf9093ecae5d8447be45b33b1a53d014ce2821f085946f774d851dad93e85.scope - libcontainer container c42cf9093ecae5d8447be45b33b1a53d014ce2821f085946f774d851dad93e85. Dec 16 03:29:41.094000 audit: BPF prog-id=151 op=LOAD Dec 16 03:29:41.095000 audit: BPF prog-id=152 op=LOAD Dec 16 03:29:41.095000 audit[3337]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3325 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:41.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326366393039336563616535643834343762653435623333623161 Dec 16 03:29:41.095000 audit: BPF prog-id=152 op=UNLOAD Dec 16 03:29:41.095000 audit[3337]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3325 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:41.095000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326366393039336563616535643834343762653435623333623161 Dec 16 03:29:41.095000 audit: BPF prog-id=153 op=LOAD Dec 16 03:29:41.095000 audit[3337]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3325 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:41.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326366393039336563616535643834343762653435623333623161 Dec 16 03:29:41.095000 audit: BPF prog-id=154 op=LOAD Dec 16 03:29:41.095000 audit[3337]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3325 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:41.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326366393039336563616535643834343762653435623333623161 Dec 16 03:29:41.095000 audit: BPF prog-id=154 op=UNLOAD Dec 16 03:29:41.095000 audit[3337]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3325 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:29:41.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326366393039336563616535643834343762653435623333623161 Dec 16 03:29:41.095000 audit: BPF prog-id=153 op=UNLOAD Dec 16 03:29:41.095000 audit[3337]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3325 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:41.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326366393039336563616535643834343762653435623333623161 Dec 16 03:29:41.095000 audit: BPF prog-id=155 op=LOAD Dec 16 03:29:41.095000 audit[3337]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3325 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:41.095000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326366393039336563616535643834343762653435623333623161 Dec 16 03:29:41.156892 containerd[1851]: time="2025-12-16T03:29:41.155514962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hl7hh,Uid:5e4b73a4-aabb-49da-b722-a97df77eacc0,Namespace:tigera-operator,Attempt:0,} returns sandbox id 
\"c42cf9093ecae5d8447be45b33b1a53d014ce2821f085946f774d851dad93e85\"" Dec 16 03:29:41.162530 containerd[1851]: time="2025-12-16T03:29:41.162492326Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 03:29:42.114480 update_engine[1839]: I20251216 03:29:42.114398 1839 update_attempter.cc:509] Updating boot flags... Dec 16 03:29:42.258000 audit[3410]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3410 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.258000 audit[3410]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff66f6ac90 a2=0 a3=7fff66f6ac7c items=0 ppid=3301 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.258000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 03:29:42.262000 audit[3411]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3411 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.262000 audit[3411]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd77ec1190 a2=0 a3=7ffd77ec117c items=0 ppid=3301 pid=3411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.262000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 03:29:42.263000 audit[3412]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.263000 audit[3412]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc14bb9350 a2=0 a3=7ffc14bb933c items=0 
ppid=3301 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 03:29:42.265000 audit[3414]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3414 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.265000 audit[3414]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc63622d0 a2=0 a3=7ffcc63622bc items=0 ppid=3301 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.265000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 03:29:42.265000 audit[3413]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3413 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.265000 audit[3413]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc61e86f60 a2=0 a3=7ffc61e86f4c items=0 ppid=3301 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.265000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 03:29:42.266000 audit[3415]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3415 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.266000 audit[3415]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffe89545eb0 a2=0 a3=7ffe89545e9c items=0 ppid=3301 pid=3415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.266000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 03:29:42.379000 audit[3489]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.379000 audit[3489]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdcf05d6e0 a2=0 a3=7ffdcf05d6cc items=0 ppid=3301 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.379000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 03:29:42.428000 audit[3502]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.428000 audit[3502]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc300a1580 a2=0 a3=7ffc300a156c items=0 ppid=3301 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.428000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 03:29:42.461000 audit[3505]: NETFILTER_CFG 
table=filter:62 family=2 entries=1 op=nft_register_rule pid=3505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.461000 audit[3505]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd4b3f32f0 a2=0 a3=7ffd4b3f32dc items=0 ppid=3301 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.461000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 03:29:42.467000 audit[3506]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.467000 audit[3506]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdb2eda70 a2=0 a3=7ffcdb2eda5c items=0 ppid=3301 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.467000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 03:29:42.485000 audit[3508]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.485000 audit[3508]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe61cc8020 a2=0 a3=7ffe61cc800c items=0 ppid=3301 pid=3508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.485000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 03:29:42.489000 audit[3509]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.489000 audit[3509]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6098b2b0 a2=0 a3=7ffe6098b29c items=0 ppid=3301 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.489000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 03:29:42.496000 audit[3511]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.496000 audit[3511]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd7211db70 a2=0 a3=7ffd7211db5c items=0 ppid=3301 pid=3511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.496000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 03:29:42.510000 audit[3515]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.510000 audit[3515]: SYSCALL arch=c000003e syscall=46 
success=yes exit=744 a0=3 a1=7ffd4b1edaf0 a2=0 a3=7ffd4b1edadc items=0 ppid=3301 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.510000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 03:29:42.514000 audit[3516]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.514000 audit[3516]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8a750ae0 a2=0 a3=7ffd8a750acc items=0 ppid=3301 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.514000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 03:29:42.523000 audit[3518]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.523000 audit[3518]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf2de7900 a2=0 a3=7ffcf2de78ec items=0 ppid=3301 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.523000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 03:29:42.527000 audit[3519]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.527000 audit[3519]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeaa6da030 a2=0 a3=7ffeaa6da01c items=0 ppid=3301 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.527000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 03:29:42.542000 audit[3522]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.542000 audit[3522]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff38c8b910 a2=0 a3=7fff38c8b8fc items=0 ppid=3301 pid=3522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.542000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:29:42.559000 audit[3529]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.559000 audit[3529]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffd4230e430 a2=0 a3=7ffd4230e41c items=0 ppid=3301 pid=3529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.559000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:29:42.572000 audit[3533]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.572000 audit[3533]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc1e693300 a2=0 a3=7ffc1e6932ec items=0 ppid=3301 pid=3533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.572000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 03:29:42.574000 audit[3534]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.574000 audit[3534]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd08ce77b0 a2=0 a3=7ffd08ce779c items=0 ppid=3301 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.574000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 03:29:42.587000 audit[3536]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.587000 audit[3536]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe85910ab0 a2=0 a3=7ffe85910a9c items=0 ppid=3301 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.587000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:29:42.604000 audit[3539]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.604000 audit[3539]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd4ce56b90 a2=0 a3=7ffd4ce56b7c items=0 ppid=3301 pid=3539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.604000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:29:42.612000 audit[3540]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.612000 audit[3540]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed0b4df20 a2=0 a3=7ffed0b4df0c items=0 ppid=3301 pid=3540 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.612000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 03:29:42.628000 audit[3544]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:29:42.628000 audit[3544]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcc4a8b730 a2=0 a3=7ffcc4a8b71c items=0 ppid=3301 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.628000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:29:42.703000 audit[3584]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:42.703000 audit[3584]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe67e8dc30 a2=0 a3=7ffe67e8dc1c items=0 ppid=3301 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:42.715000 audit[3584]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3584 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 16 03:29:42.715000 audit[3584]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe67e8dc30 a2=0 a3=7ffe67e8dc1c items=0 ppid=3301 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.715000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:42.724000 audit[3616]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3616 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.724000 audit[3616]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc8a25c7f0 a2=0 a3=7ffc8a25c7dc items=0 ppid=3301 pid=3616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.724000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 03:29:42.731000 audit[3625]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3625 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.731000 audit[3625]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe2d6dbb70 a2=0 a3=7ffe2d6dbb5c items=0 ppid=3301 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.731000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 03:29:42.741000 audit[3632]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3632 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.741000 audit[3632]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff69f07de0 a2=0 a3=7fff69f07dcc items=0 ppid=3301 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.741000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 03:29:42.744000 audit[3635]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3635 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.744000 audit[3635]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce61c50f0 a2=0 a3=7ffce61c50dc items=0 ppid=3301 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.744000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 03:29:42.755000 audit[3638]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3638 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.755000 audit[3638]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffd7fb01370 a2=0 a3=7ffd7fb0135c items=0 ppid=3301 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.755000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 03:29:42.767000 audit[3639]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3639 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.767000 audit[3639]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce0ee2b70 a2=0 a3=7ffce0ee2b5c items=0 ppid=3301 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.767000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 03:29:42.776000 audit[3642]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3642 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.776000 audit[3642]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffde9f3c950 a2=0 a3=7ffde9f3c93c items=0 ppid=3301 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.776000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 03:29:42.783000 audit[3645]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3645 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.783000 audit[3645]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe55cde6e0 a2=0 a3=7ffe55cde6cc items=0 ppid=3301 pid=3645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.783000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 03:29:42.785000 audit[3646]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3646 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.785000 audit[3646]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2e6997a0 a2=0 a3=7fff2e69978c items=0 ppid=3301 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.785000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 03:29:42.790000 audit[3648]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3648 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.790000 audit[3648]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffd5d0699d0 a2=0 a3=7ffd5d0699bc items=0 ppid=3301 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 03:29:42.792000 audit[3649]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3649 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.792000 audit[3649]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8a5243c0 a2=0 a3=7ffd8a5243ac items=0 ppid=3301 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.792000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 03:29:42.796000 audit[3651]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3651 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.796000 audit[3651]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffce9cf52e0 a2=0 a3=7ffce9cf52cc items=0 ppid=3301 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.796000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:29:42.805000 audit[3654]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3654 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.805000 audit[3654]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffdb558340 a2=0 a3=7fffdb55832c items=0 ppid=3301 pid=3654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.805000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 03:29:42.826000 audit[3658]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3658 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.826000 audit[3658]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff9c79f390 a2=0 a3=7fff9c79f37c items=0 ppid=3301 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.826000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 03:29:42.835000 audit[3659]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3659 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.835000 audit[3659]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe7e5416f0 a2=0 a3=7ffe7e5416dc items=0 ppid=3301 pid=3659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.835000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 03:29:42.843000 audit[3661]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3661 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.843000 audit[3661]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe8c78da60 a2=0 a3=7ffe8c78da4c items=0 ppid=3301 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.843000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:29:42.853000 audit[3665]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3665 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.853000 audit[3665]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffb08e4620 a2=0 a3=7fffb08e460c items=0 ppid=3301 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.853000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:29:42.856000 audit[3666]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3666 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.856000 audit[3666]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd87b7dd70 a2=0 a3=7ffd87b7dd5c items=0 ppid=3301 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.856000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 03:29:42.861000 audit[3668]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3668 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.861000 audit[3668]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffef0cb32b0 a2=0 a3=7ffef0cb329c items=0 ppid=3301 pid=3668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.861000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:29:42.864000 audit[3669]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3669 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.864000 audit[3669]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1a2a03d0 a2=0 
a3=7fff1a2a03bc items=0 ppid=3301 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.864000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:29:42.871000 audit[3671]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3671 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.871000 audit[3671]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe2ad5c4b0 a2=0 a3=7ffe2ad5c49c items=0 ppid=3301 pid=3671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.871000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:29:42.881000 audit[3674]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3674 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:29:42.881000 audit[3674]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff4ebb64a0 a2=0 a3=7fff4ebb648c items=0 ppid=3301 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.881000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:29:42.895000 audit[3676]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3676 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:29:42.895000 audit[3676]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffdce2eb4e0 a2=0 a3=7ffdce2eb4cc items=0 ppid=3301 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.895000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:42.896000 audit[3676]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3676 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:29:42.896000 audit[3676]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffdce2eb4e0 a2=0 a3=7ffdce2eb4cc items=0 ppid=3301 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:42.896000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:43.246624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4253246566.mount: Deactivated successfully. 
Dec 16 03:29:43.535654 kubelet[3199]: I1216 03:29:43.534850 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bx4rz" podStartSLOduration=3.534833681 podStartE2EDuration="3.534833681s" podCreationTimestamp="2025-12-16 03:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:29:41.491720604 +0000 UTC m=+6.320655935" watchObservedRunningTime="2025-12-16 03:29:43.534833681 +0000 UTC m=+8.363769011" Dec 16 03:29:44.423766 containerd[1851]: time="2025-12-16T03:29:44.423709998Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:44.424848 containerd[1851]: time="2025-12-16T03:29:44.424801754Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Dec 16 03:29:44.430143 containerd[1851]: time="2025-12-16T03:29:44.426567922Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:44.475592 containerd[1851]: time="2025-12-16T03:29:44.474845659Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:29:44.476770 containerd[1851]: time="2025-12-16T03:29:44.476741858Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.314010166s" Dec 16 03:29:44.476951 containerd[1851]: time="2025-12-16T03:29:44.476933634Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 03:29:44.483968 containerd[1851]: time="2025-12-16T03:29:44.483910421Z" level=info msg="CreateContainer within sandbox \"c42cf9093ecae5d8447be45b33b1a53d014ce2821f085946f774d851dad93e85\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 03:29:44.502391 containerd[1851]: time="2025-12-16T03:29:44.502335679Z" level=info msg="Container c5515f9677a6228b55d852b04660f65a41b90e8100caef6a1299c76a503b4267: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:29:44.512829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3312033800.mount: Deactivated successfully. Dec 16 03:29:44.517896 containerd[1851]: time="2025-12-16T03:29:44.517845045Z" level=info msg="CreateContainer within sandbox \"c42cf9093ecae5d8447be45b33b1a53d014ce2821f085946f774d851dad93e85\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c5515f9677a6228b55d852b04660f65a41b90e8100caef6a1299c76a503b4267\"" Dec 16 03:29:44.519387 containerd[1851]: time="2025-12-16T03:29:44.519343448Z" level=info msg="StartContainer for \"c5515f9677a6228b55d852b04660f65a41b90e8100caef6a1299c76a503b4267\"" Dec 16 03:29:44.520463 containerd[1851]: time="2025-12-16T03:29:44.520427297Z" level=info msg="connecting to shim c5515f9677a6228b55d852b04660f65a41b90e8100caef6a1299c76a503b4267" address="unix:///run/containerd/s/38a2b3ab94e5efe1b7a0d0590c4796fc4f2e50cf518df4ed65a0df00a964ba52" protocol=ttrpc version=3 Dec 16 03:29:44.550481 systemd[1]: Started cri-containerd-c5515f9677a6228b55d852b04660f65a41b90e8100caef6a1299c76a503b4267.scope - libcontainer container c5515f9677a6228b55d852b04660f65a41b90e8100caef6a1299c76a503b4267. 
Dec 16 03:29:44.566000 audit: BPF prog-id=156 op=LOAD Dec 16 03:29:44.567000 audit: BPF prog-id=157 op=LOAD Dec 16 03:29:44.567000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3325 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:44.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335353135663936373761363232386235356438353262303436363066 Dec 16 03:29:44.567000 audit: BPF prog-id=157 op=UNLOAD Dec 16 03:29:44.567000 audit[3774]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3325 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:44.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335353135663936373761363232386235356438353262303436363066 Dec 16 03:29:44.568000 audit: BPF prog-id=158 op=LOAD Dec 16 03:29:44.568000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3325 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:44.568000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335353135663936373761363232386235356438353262303436363066 Dec 16 03:29:44.568000 audit: BPF prog-id=159 op=LOAD Dec 16 03:29:44.568000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3325 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:44.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335353135663936373761363232386235356438353262303436363066 Dec 16 03:29:44.568000 audit: BPF prog-id=159 op=UNLOAD Dec 16 03:29:44.568000 audit[3774]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3325 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:44.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335353135663936373761363232386235356438353262303436363066 Dec 16 03:29:44.568000 audit: BPF prog-id=158 op=UNLOAD Dec 16 03:29:44.568000 audit[3774]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3325 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:29:44.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335353135663936373761363232386235356438353262303436363066 Dec 16 03:29:44.568000 audit: BPF prog-id=160 op=LOAD Dec 16 03:29:44.568000 audit[3774]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3325 pid=3774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:44.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335353135663936373761363232386235356438353262303436363066 Dec 16 03:29:44.594518 containerd[1851]: time="2025-12-16T03:29:44.594426772Z" level=info msg="StartContainer for \"c5515f9677a6228b55d852b04660f65a41b90e8100caef6a1299c76a503b4267\" returns successfully" Dec 16 03:29:52.157563 sudo[2232]: pam_unix(sudo:session): session closed for user root Dec 16 03:29:52.156000 audit[2232]: USER_END pid=2232 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:29:52.160111 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 03:29:52.160256 kernel: audit: type=1106 audit(1765855792.156:536): pid=2232 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 03:29:52.156000 audit[2232]: CRED_DISP pid=2232 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:29:52.173206 kernel: audit: type=1104 audit(1765855792.156:537): pid=2232 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:29:52.186459 sshd[2231]: Connection closed by 147.75.109.163 port 46892 Dec 16 03:29:52.188386 sshd-session[2227]: pam_unix(sshd:session): session closed for user core Dec 16 03:29:52.191000 audit[2227]: USER_END pid=2227 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:52.201195 kernel: audit: type=1106 audit(1765855792.191:538): pid=2227 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:52.204605 systemd[1]: sshd@6-172.31.30.117:22-147.75.109.163:46892.service: Deactivated successfully. 
Dec 16 03:29:52.191000 audit[2227]: CRED_DISP pid=2227 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:52.216193 kernel: audit: type=1104 audit(1765855792.191:539): pid=2227 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:29:52.217665 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 03:29:52.218063 systemd[1]: session-8.scope: Consumed 4.360s CPU time, 151.3M memory peak. Dec 16 03:29:52.219568 systemd-logind[1837]: Session 8 logged out. Waiting for processes to exit. Dec 16 03:29:52.231276 kernel: audit: type=1131 audit(1765855792.203:540): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.30.117:22-147.75.109.163:46892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:52.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.30.117:22-147.75.109.163:46892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:29:52.231569 systemd-logind[1837]: Removed session 8. 
Dec 16 03:29:53.180000 audit[3858]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3858 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:53.185502 kernel: audit: type=1325 audit(1765855793.180:541): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3858 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:53.180000 audit[3858]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffea211da70 a2=0 a3=7ffea211da5c items=0 ppid=3301 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:53.194856 kernel: audit: type=1300 audit(1765855793.180:541): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffea211da70 a2=0 a3=7ffea211da5c items=0 ppid=3301 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:53.180000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:53.199308 kernel: audit: type=1327 audit(1765855793.180:541): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:53.200000 audit[3858]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3858 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:53.206198 kernel: audit: type=1325 audit(1765855793.200:542): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3858 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:53.200000 audit[3858]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea211da70 a2=0 a3=0 items=0 ppid=3301 pid=3858 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:53.214400 kernel: audit: type=1300 audit(1765855793.200:542): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea211da70 a2=0 a3=0 items=0 ppid=3301 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:53.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:53.221000 audit[3860]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3860 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:53.221000 audit[3860]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffef61b8160 a2=0 a3=7ffef61b814c items=0 ppid=3301 pid=3860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:53.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:53.230000 audit[3860]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3860 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:53.230000 audit[3860]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffef61b8160 a2=0 a3=0 items=0 ppid=3301 pid=3860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:53.230000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:56.954000 audit[3864]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3864 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:56.954000 audit[3864]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcf5e97f10 a2=0 a3=7ffcf5e97efc items=0 ppid=3301 pid=3864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:56.954000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:56.973000 audit[3864]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3864 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:56.973000 audit[3864]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf5e97f10 a2=0 a3=0 items=0 ppid=3301 pid=3864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:56.973000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:57.015000 audit[3866]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:57.015000 audit[3866]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffc6a6b930 a2=0 a3=7fffc6a6b91c items=0 ppid=3301 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:57.015000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:57.021000 audit[3866]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:57.021000 audit[3866]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffc6a6b930 a2=0 a3=0 items=0 ppid=3301 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:57.021000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:58.087549 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 16 03:29:58.087693 kernel: audit: type=1325 audit(1765855798.081:549): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:58.081000 audit[3870]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:58.094270 kernel: audit: type=1300 audit(1765855798.081:549): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe0713b330 a2=0 a3=7ffe0713b31c items=0 ppid=3301 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:58.081000 audit[3870]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe0713b330 a2=0 a3=7ffe0713b31c items=0 ppid=3301 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:58.097773 kernel: audit: type=1327 audit(1765855798.081:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:58.081000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:58.101236 kernel: audit: type=1325 audit(1765855798.089:550): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:58.089000 audit[3870]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:58.089000 audit[3870]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe0713b330 a2=0 a3=0 items=0 ppid=3301 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:58.113832 kernel: audit: type=1300 audit(1765855798.089:550): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe0713b330 a2=0 a3=0 items=0 ppid=3301 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:58.113972 kernel: audit: type=1327 audit(1765855798.089:550): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:58.089000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:59.284000 audit[3872]: NETFILTER_CFG table=filter:115 family=2 entries=21 
op=nft_register_rule pid=3872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:59.296663 kernel: audit: type=1325 audit(1765855799.284:551): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:59.296791 kernel: audit: type=1300 audit(1765855799.284:551): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcef24f000 a2=0 a3=7ffcef24efec items=0 ppid=3301 pid=3872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:59.284000 audit[3872]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcef24f000 a2=0 a3=7ffcef24efec items=0 ppid=3301 pid=3872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:59.284000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:59.302280 kernel: audit: type=1327 audit(1765855799.284:551): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:59.297000 audit[3872]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:59.306599 kernel: audit: type=1325 audit(1765855799.297:552): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:29:59.297000 audit[3872]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcef24f000 a2=0 a3=0 items=0 ppid=3301 pid=3872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:59.297000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:29:59.330215 kubelet[3199]: I1216 03:29:59.329285 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-hl7hh" podStartSLOduration=16.011853746 podStartE2EDuration="19.329213657s" podCreationTimestamp="2025-12-16 03:29:40 +0000 UTC" firstStartedPulling="2025-12-16 03:29:41.160868638 +0000 UTC m=+5.989803970" lastFinishedPulling="2025-12-16 03:29:44.478228571 +0000 UTC m=+9.307163881" observedRunningTime="2025-12-16 03:29:45.502467412 +0000 UTC m=+10.331402743" watchObservedRunningTime="2025-12-16 03:29:59.329213657 +0000 UTC m=+24.158149164" Dec 16 03:29:59.345043 systemd[1]: Created slice kubepods-besteffort-pod38843700_b322_4803_9aa7_6ab49b50e81d.slice - libcontainer container kubepods-besteffort-pod38843700_b322_4803_9aa7_6ab49b50e81d.slice. 
Dec 16 03:29:59.428380 kubelet[3199]: I1216 03:29:59.428335 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/38843700-b322-4803-9aa7-6ab49b50e81d-typha-certs\") pod \"calico-typha-85964f4bc5-4jbhc\" (UID: \"38843700-b322-4803-9aa7-6ab49b50e81d\") " pod="calico-system/calico-typha-85964f4bc5-4jbhc" Dec 16 03:29:59.428547 kubelet[3199]: I1216 03:29:59.428427 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgcld\" (UniqueName: \"kubernetes.io/projected/38843700-b322-4803-9aa7-6ab49b50e81d-kube-api-access-vgcld\") pod \"calico-typha-85964f4bc5-4jbhc\" (UID: \"38843700-b322-4803-9aa7-6ab49b50e81d\") " pod="calico-system/calico-typha-85964f4bc5-4jbhc" Dec 16 03:29:59.428547 kubelet[3199]: I1216 03:29:59.428464 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38843700-b322-4803-9aa7-6ab49b50e81d-tigera-ca-bundle\") pod \"calico-typha-85964f4bc5-4jbhc\" (UID: \"38843700-b322-4803-9aa7-6ab49b50e81d\") " pod="calico-system/calico-typha-85964f4bc5-4jbhc" Dec 16 03:29:59.612011 systemd[1]: Created slice kubepods-besteffort-pod9c9cc891_255e_45da_9628_24ee399ddb12.slice - libcontainer container kubepods-besteffort-pod9c9cc891_255e_45da_9628_24ee399ddb12.slice. 
Dec 16 03:29:59.653036 containerd[1851]: time="2025-12-16T03:29:59.652987427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85964f4bc5-4jbhc,Uid:38843700-b322-4803-9aa7-6ab49b50e81d,Namespace:calico-system,Attempt:0,}" Dec 16 03:29:59.681016 containerd[1851]: time="2025-12-16T03:29:59.680959343Z" level=info msg="connecting to shim e103a0288cc74246b44f03331e84dff8ed75ce0a4507ecefa88f375e6406a087" address="unix:///run/containerd/s/e8cc3bd724bcaca2310db2b739ea2e329647d0bc5bd9aa4f7eacd8e9bee8c64a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:29:59.726491 systemd[1]: Started cri-containerd-e103a0288cc74246b44f03331e84dff8ed75ce0a4507ecefa88f375e6406a087.scope - libcontainer container e103a0288cc74246b44f03331e84dff8ed75ce0a4507ecefa88f375e6406a087. Dec 16 03:29:59.730432 kubelet[3199]: I1216 03:29:59.730143 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9c9cc891-255e-45da-9628-24ee399ddb12-var-run-calico\") pod \"calico-node-h99kj\" (UID: \"9c9cc891-255e-45da-9628-24ee399ddb12\") " pod="calico-system/calico-node-h99kj" Dec 16 03:29:59.730573 kubelet[3199]: I1216 03:29:59.730539 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9c9cc891-255e-45da-9628-24ee399ddb12-cni-bin-dir\") pod \"calico-node-h99kj\" (UID: \"9c9cc891-255e-45da-9628-24ee399ddb12\") " pod="calico-system/calico-node-h99kj" Dec 16 03:29:59.730639 kubelet[3199]: I1216 03:29:59.730612 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c9cc891-255e-45da-9628-24ee399ddb12-tigera-ca-bundle\") pod \"calico-node-h99kj\" (UID: \"9c9cc891-255e-45da-9628-24ee399ddb12\") " pod="calico-system/calico-node-h99kj" Dec 16 03:29:59.730727 kubelet[3199]: I1216 
03:29:59.730642 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9c9cc891-255e-45da-9628-24ee399ddb12-var-lib-calico\") pod \"calico-node-h99kj\" (UID: \"9c9cc891-255e-45da-9628-24ee399ddb12\") " pod="calico-system/calico-node-h99kj" Dec 16 03:29:59.730884 kubelet[3199]: I1216 03:29:59.730718 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9c9cc891-255e-45da-9628-24ee399ddb12-flexvol-driver-host\") pod \"calico-node-h99kj\" (UID: \"9c9cc891-255e-45da-9628-24ee399ddb12\") " pod="calico-system/calico-node-h99kj" Dec 16 03:29:59.730943 kubelet[3199]: I1216 03:29:59.730920 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c9cc891-255e-45da-9628-24ee399ddb12-xtables-lock\") pod \"calico-node-h99kj\" (UID: \"9c9cc891-255e-45da-9628-24ee399ddb12\") " pod="calico-system/calico-node-h99kj" Dec 16 03:29:59.731128 kubelet[3199]: I1216 03:29:59.730988 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9c9cc891-255e-45da-9628-24ee399ddb12-cni-log-dir\") pod \"calico-node-h99kj\" (UID: \"9c9cc891-255e-45da-9628-24ee399ddb12\") " pod="calico-system/calico-node-h99kj" Dec 16 03:29:59.731128 kubelet[3199]: I1216 03:29:59.731065 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c9cc891-255e-45da-9628-24ee399ddb12-lib-modules\") pod \"calico-node-h99kj\" (UID: \"9c9cc891-255e-45da-9628-24ee399ddb12\") " pod="calico-system/calico-node-h99kj" Dec 16 03:29:59.731279 kubelet[3199]: I1216 03:29:59.731259 3199 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9c9cc891-255e-45da-9628-24ee399ddb12-node-certs\") pod \"calico-node-h99kj\" (UID: \"9c9cc891-255e-45da-9628-24ee399ddb12\") " pod="calico-system/calico-node-h99kj" Dec 16 03:29:59.731869 kubelet[3199]: I1216 03:29:59.731809 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9c9cc891-255e-45da-9628-24ee399ddb12-cni-net-dir\") pod \"calico-node-h99kj\" (UID: \"9c9cc891-255e-45da-9628-24ee399ddb12\") " pod="calico-system/calico-node-h99kj" Dec 16 03:29:59.731952 kubelet[3199]: I1216 03:29:59.731936 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9c9cc891-255e-45da-9628-24ee399ddb12-policysync\") pod \"calico-node-h99kj\" (UID: \"9c9cc891-255e-45da-9628-24ee399ddb12\") " pod="calico-system/calico-node-h99kj" Dec 16 03:29:59.732183 kubelet[3199]: I1216 03:29:59.732012 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w498d\" (UniqueName: \"kubernetes.io/projected/9c9cc891-255e-45da-9628-24ee399ddb12-kube-api-access-w498d\") pod \"calico-node-h99kj\" (UID: \"9c9cc891-255e-45da-9628-24ee399ddb12\") " pod="calico-system/calico-node-h99kj" Dec 16 03:29:59.741000 audit: BPF prog-id=161 op=LOAD Dec 16 03:29:59.742000 audit: BPF prog-id=162 op=LOAD Dec 16 03:29:59.742000 audit[3895]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:59.742000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303361303238386363373432343662343466303333333165383464 Dec 16 03:29:59.742000 audit: BPF prog-id=162 op=UNLOAD Dec 16 03:29:59.742000 audit[3895]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:59.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303361303238386363373432343662343466303333333165383464 Dec 16 03:29:59.742000 audit: BPF prog-id=163 op=LOAD Dec 16 03:29:59.742000 audit[3895]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:59.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303361303238386363373432343662343466303333333165383464 Dec 16 03:29:59.743000 audit: BPF prog-id=164 op=LOAD Dec 16 03:29:59.743000 audit[3895]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:29:59.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303361303238386363373432343662343466303333333165383464 Dec 16 03:29:59.743000 audit: BPF prog-id=164 op=UNLOAD Dec 16 03:29:59.743000 audit[3895]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:59.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303361303238386363373432343662343466303333333165383464 Dec 16 03:29:59.743000 audit: BPF prog-id=163 op=UNLOAD Dec 16 03:29:59.743000 audit[3895]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:59.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303361303238386363373432343662343466303333333165383464 Dec 16 03:29:59.743000 audit: BPF prog-id=165 op=LOAD Dec 16 03:29:59.743000 audit[3895]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3884 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:29:59.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303361303238386363373432343662343466303333333165383464 Dec 16 03:29:59.813734 kubelet[3199]: E1216 03:29:59.811901 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:29:59.867509 containerd[1851]: time="2025-12-16T03:29:59.867302957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85964f4bc5-4jbhc,Uid:38843700-b322-4803-9aa7-6ab49b50e81d,Namespace:calico-system,Attempt:0,} returns sandbox id \"e103a0288cc74246b44f03331e84dff8ed75ce0a4507ecefa88f375e6406a087\"" Dec 16 03:29:59.874909 kubelet[3199]: E1216 03:29:59.874396 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.876456 kubelet[3199]: W1216 03:29:59.876347 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.880197 kubelet[3199]: E1216 03:29:59.878636 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.894419 containerd[1851]: time="2025-12-16T03:29:59.892532290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 03:29:59.895312 kubelet[3199]: E1216 03:29:59.895282 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.895438 kubelet[3199]: W1216 03:29:59.895312 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.895438 kubelet[3199]: E1216 03:29:59.895357 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.900656 kubelet[3199]: E1216 03:29:59.895625 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.900656 kubelet[3199]: W1216 03:29:59.895639 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.900656 kubelet[3199]: E1216 03:29:59.895656 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.900656 kubelet[3199]: E1216 03:29:59.895900 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.900656 kubelet[3199]: W1216 03:29:59.895913 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.900656 kubelet[3199]: E1216 03:29:59.895933 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.900656 kubelet[3199]: E1216 03:29:59.896264 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.900656 kubelet[3199]: W1216 03:29:59.896278 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.900656 kubelet[3199]: E1216 03:29:59.896296 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.900656 kubelet[3199]: E1216 03:29:59.896521 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.901261 kubelet[3199]: W1216 03:29:59.896531 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.901261 kubelet[3199]: E1216 03:29:59.896544 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.901261 kubelet[3199]: E1216 03:29:59.896724 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.901261 kubelet[3199]: W1216 03:29:59.896734 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.901261 kubelet[3199]: E1216 03:29:59.896746 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.901261 kubelet[3199]: E1216 03:29:59.896961 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.901261 kubelet[3199]: W1216 03:29:59.896971 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.901261 kubelet[3199]: E1216 03:29:59.896986 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.901261 kubelet[3199]: E1216 03:29:59.897199 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.901261 kubelet[3199]: W1216 03:29:59.897209 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.902022 kubelet[3199]: E1216 03:29:59.897221 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.902022 kubelet[3199]: E1216 03:29:59.897416 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.902022 kubelet[3199]: W1216 03:29:59.897429 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.902022 kubelet[3199]: E1216 03:29:59.897447 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.902022 kubelet[3199]: E1216 03:29:59.897614 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.902022 kubelet[3199]: W1216 03:29:59.897624 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.902022 kubelet[3199]: E1216 03:29:59.897641 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.902022 kubelet[3199]: E1216 03:29:59.897804 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.902022 kubelet[3199]: W1216 03:29:59.897813 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.902022 kubelet[3199]: E1216 03:29:59.897833 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.902350 kubelet[3199]: E1216 03:29:59.898004 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.902350 kubelet[3199]: W1216 03:29:59.898020 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.902350 kubelet[3199]: E1216 03:29:59.898031 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.902350 kubelet[3199]: E1216 03:29:59.898274 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.902350 kubelet[3199]: W1216 03:29:59.898284 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.902350 kubelet[3199]: E1216 03:29:59.898297 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.902350 kubelet[3199]: E1216 03:29:59.898495 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.902350 kubelet[3199]: W1216 03:29:59.898506 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.902350 kubelet[3199]: E1216 03:29:59.898519 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.902350 kubelet[3199]: E1216 03:29:59.898695 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.902652 kubelet[3199]: W1216 03:29:59.898705 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.902652 kubelet[3199]: E1216 03:29:59.898716 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.902652 kubelet[3199]: E1216 03:29:59.899056 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.902652 kubelet[3199]: W1216 03:29:59.899068 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.902652 kubelet[3199]: E1216 03:29:59.899083 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.902652 kubelet[3199]: E1216 03:29:59.899327 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.902652 kubelet[3199]: W1216 03:29:59.899348 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.902652 kubelet[3199]: E1216 03:29:59.899361 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.902652 kubelet[3199]: E1216 03:29:59.899535 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.902652 kubelet[3199]: W1216 03:29:59.899545 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.902918 kubelet[3199]: E1216 03:29:59.899558 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.902918 kubelet[3199]: E1216 03:29:59.899792 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.902918 kubelet[3199]: W1216 03:29:59.900345 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.902918 kubelet[3199]: E1216 03:29:59.900380 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.902918 kubelet[3199]: E1216 03:29:59.900611 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.902918 kubelet[3199]: W1216 03:29:59.900623 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.902918 kubelet[3199]: E1216 03:29:59.900637 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.916420 kubelet[3199]: E1216 03:29:59.916389 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.916716 kubelet[3199]: W1216 03:29:59.916615 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.916810 kubelet[3199]: E1216 03:29:59.916650 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.919406 containerd[1851]: time="2025-12-16T03:29:59.919369027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h99kj,Uid:9c9cc891-255e-45da-9628-24ee399ddb12,Namespace:calico-system,Attempt:0,}" Dec 16 03:29:59.933717 kubelet[3199]: E1216 03:29:59.933310 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.933717 kubelet[3199]: W1216 03:29:59.933336 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.933717 kubelet[3199]: E1216 03:29:59.933363 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.933717 kubelet[3199]: I1216 03:29:59.933403 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/39c8b25e-ea78-443a-855e-e43746267826-registration-dir\") pod \"csi-node-driver-pl4wm\" (UID: \"39c8b25e-ea78-443a-855e-e43746267826\") " pod="calico-system/csi-node-driver-pl4wm" Dec 16 03:29:59.936447 kubelet[3199]: E1216 03:29:59.933756 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.936447 kubelet[3199]: W1216 03:29:59.933768 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.936447 kubelet[3199]: E1216 03:29:59.933813 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.936447 kubelet[3199]: E1216 03:29:59.934076 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.936447 kubelet[3199]: W1216 03:29:59.934090 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.936447 kubelet[3199]: E1216 03:29:59.934353 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.936447 kubelet[3199]: E1216 03:29:59.934420 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.936447 kubelet[3199]: W1216 03:29:59.934432 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.936447 kubelet[3199]: E1216 03:29:59.934445 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.936827 kubelet[3199]: I1216 03:29:59.934474 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/39c8b25e-ea78-443a-855e-e43746267826-varrun\") pod \"csi-node-driver-pl4wm\" (UID: \"39c8b25e-ea78-443a-855e-e43746267826\") " pod="calico-system/csi-node-driver-pl4wm" Dec 16 03:29:59.936827 kubelet[3199]: E1216 03:29:59.935198 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.936827 kubelet[3199]: W1216 03:29:59.935212 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.936827 kubelet[3199]: E1216 03:29:59.935239 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.936827 kubelet[3199]: I1216 03:29:59.935266 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/39c8b25e-ea78-443a-855e-e43746267826-socket-dir\") pod \"csi-node-driver-pl4wm\" (UID: \"39c8b25e-ea78-443a-855e-e43746267826\") " pod="calico-system/csi-node-driver-pl4wm" Dec 16 03:29:59.936827 kubelet[3199]: E1216 03:29:59.935558 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.936827 kubelet[3199]: W1216 03:29:59.935569 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.936827 kubelet[3199]: E1216 03:29:59.935597 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.936827 kubelet[3199]: E1216 03:29:59.935853 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.939280 kubelet[3199]: W1216 03:29:59.935864 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.939280 kubelet[3199]: E1216 03:29:59.935891 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.939280 kubelet[3199]: E1216 03:29:59.936356 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.939280 kubelet[3199]: W1216 03:29:59.936372 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.939280 kubelet[3199]: E1216 03:29:59.936400 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.939280 kubelet[3199]: I1216 03:29:59.936688 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmrnc\" (UniqueName: \"kubernetes.io/projected/39c8b25e-ea78-443a-855e-e43746267826-kube-api-access-zmrnc\") pod \"csi-node-driver-pl4wm\" (UID: \"39c8b25e-ea78-443a-855e-e43746267826\") " pod="calico-system/csi-node-driver-pl4wm" Dec 16 03:29:59.939280 kubelet[3199]: E1216 03:29:59.936793 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.939280 kubelet[3199]: W1216 03:29:59.936804 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.939280 kubelet[3199]: E1216 03:29:59.936822 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.939684 kubelet[3199]: E1216 03:29:59.937110 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.939684 kubelet[3199]: W1216 03:29:59.937125 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.939684 kubelet[3199]: E1216 03:29:59.937149 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.939684 kubelet[3199]: E1216 03:29:59.938278 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.939684 kubelet[3199]: W1216 03:29:59.938295 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.939684 kubelet[3199]: E1216 03:29:59.938320 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.939684 kubelet[3199]: I1216 03:29:59.938560 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39c8b25e-ea78-443a-855e-e43746267826-kubelet-dir\") pod \"csi-node-driver-pl4wm\" (UID: \"39c8b25e-ea78-443a-855e-e43746267826\") " pod="calico-system/csi-node-driver-pl4wm" Dec 16 03:29:59.939684 kubelet[3199]: E1216 03:29:59.938670 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.939684 kubelet[3199]: W1216 03:29:59.938680 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.940032 kubelet[3199]: E1216 03:29:59.938697 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.940032 kubelet[3199]: E1216 03:29:59.938922 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.940032 kubelet[3199]: W1216 03:29:59.938935 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.940032 kubelet[3199]: E1216 03:29:59.938959 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:29:59.940032 kubelet[3199]: E1216 03:29:59.939223 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.940032 kubelet[3199]: W1216 03:29:59.939233 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.940032 kubelet[3199]: E1216 03:29:59.939246 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.940032 kubelet[3199]: E1216 03:29:59.939428 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:29:59.940032 kubelet[3199]: W1216 03:29:59.939438 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:29:59.940032 kubelet[3199]: E1216 03:29:59.939450 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:29:59.966372 containerd[1851]: time="2025-12-16T03:29:59.966318429Z" level=info msg="connecting to shim c604e1f5e0c1767ccd966137b408a7131942a0c0b98fb6a7a300b5ca75d56e75" address="unix:///run/containerd/s/0784cc1695a03b3635eac28cd30895e50552857ece8073cb596f396c2006152d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:30:00.016327 systemd[1]: Started cri-containerd-c604e1f5e0c1767ccd966137b408a7131942a0c0b98fb6a7a300b5ca75d56e75.scope - libcontainer container c604e1f5e0c1767ccd966137b408a7131942a0c0b98fb6a7a300b5ca75d56e75. 
Dec 16 03:30:00.043206 kubelet[3199]: E1216 03:30:00.041109 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.043206 kubelet[3199]: W1216 03:30:00.041244 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.043206 kubelet[3199]: E1216 03:30:00.041325 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:00.044907 kubelet[3199]: E1216 03:30:00.044869 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.045440 kubelet[3199]: W1216 03:30:00.045145 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.045440 kubelet[3199]: E1216 03:30:00.045210 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:00.045946 kubelet[3199]: E1216 03:30:00.045926 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.046434 kubelet[3199]: W1216 03:30:00.046156 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.046434 kubelet[3199]: E1216 03:30:00.046396 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:00.048003 kubelet[3199]: E1216 03:30:00.047908 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.048003 kubelet[3199]: W1216 03:30:00.047930 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.048003 kubelet[3199]: E1216 03:30:00.047968 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:00.049288 kubelet[3199]: E1216 03:30:00.048974 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.049288 kubelet[3199]: W1216 03:30:00.049206 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.049288 kubelet[3199]: E1216 03:30:00.049249 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:00.051427 kubelet[3199]: E1216 03:30:00.051085 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.051427 kubelet[3199]: W1216 03:30:00.051112 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.051427 kubelet[3199]: E1216 03:30:00.051181 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:00.056092 kubelet[3199]: E1216 03:30:00.055883 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.056092 kubelet[3199]: W1216 03:30:00.055913 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.056092 kubelet[3199]: E1216 03:30:00.055992 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:00.057197 kubelet[3199]: E1216 03:30:00.056407 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.057197 kubelet[3199]: W1216 03:30:00.056427 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.057197 kubelet[3199]: E1216 03:30:00.056535 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:00.057197 kubelet[3199]: E1216 03:30:00.056832 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.057197 kubelet[3199]: W1216 03:30:00.056845 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.057197 kubelet[3199]: E1216 03:30:00.056974 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:00.057653 kubelet[3199]: E1216 03:30:00.057218 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.057653 kubelet[3199]: W1216 03:30:00.057230 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.057653 kubelet[3199]: E1216 03:30:00.057339 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:00.058282 kubelet[3199]: E1216 03:30:00.057818 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.058282 kubelet[3199]: W1216 03:30:00.057837 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.058282 kubelet[3199]: E1216 03:30:00.058010 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:00.058282 kubelet[3199]: E1216 03:30:00.058108 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.058282 kubelet[3199]: W1216 03:30:00.058148 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.058703 kubelet[3199]: E1216 03:30:00.058510 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:00.058703 kubelet[3199]: E1216 03:30:00.058615 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.058703 kubelet[3199]: W1216 03:30:00.058626 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.058858 kubelet[3199]: E1216 03:30:00.058741 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:00.059672 kubelet[3199]: E1216 03:30:00.059001 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.059672 kubelet[3199]: W1216 03:30:00.059113 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.059672 kubelet[3199]: E1216 03:30:00.059323 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:00.059672 kubelet[3199]: E1216 03:30:00.059645 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.059672 kubelet[3199]: W1216 03:30:00.059659 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.060228 kubelet[3199]: E1216 03:30:00.060126 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:00.060455 kubelet[3199]: E1216 03:30:00.060431 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.060455 kubelet[3199]: W1216 03:30:00.060448 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.063129 kubelet[3199]: E1216 03:30:00.062075 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:00.065157 kubelet[3199]: E1216 03:30:00.063273 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.065157 kubelet[3199]: W1216 03:30:00.063321 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.065803 kubelet[3199]: E1216 03:30:00.065349 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:00.066370 kubelet[3199]: E1216 03:30:00.066337 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.066370 kubelet[3199]: W1216 03:30:00.066361 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.066552 kubelet[3199]: E1216 03:30:00.066524 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:00.067036 kubelet[3199]: E1216 03:30:00.066991 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.067036 kubelet[3199]: W1216 03:30:00.067018 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.067325 kubelet[3199]: E1216 03:30:00.067110 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:00.067538 kubelet[3199]: E1216 03:30:00.067522 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.067625 kubelet[3199]: W1216 03:30:00.067538 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.068612 kubelet[3199]: E1216 03:30:00.068126 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:00.068612 kubelet[3199]: E1216 03:30:00.068212 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.068612 kubelet[3199]: W1216 03:30:00.068223 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.068612 kubelet[3199]: E1216 03:30:00.068347 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:00.070332 kubelet[3199]: E1216 03:30:00.068965 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.070332 kubelet[3199]: W1216 03:30:00.068980 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.070332 kubelet[3199]: E1216 03:30:00.069205 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:00.070332 kubelet[3199]: E1216 03:30:00.069426 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.070332 kubelet[3199]: W1216 03:30:00.069437 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.070332 kubelet[3199]: E1216 03:30:00.069471 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:00.070332 kubelet[3199]: E1216 03:30:00.070274 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.070332 kubelet[3199]: W1216 03:30:00.070287 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.070332 kubelet[3199]: E1216 03:30:00.070326 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:00.070818 kubelet[3199]: E1216 03:30:00.070801 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.070818 kubelet[3199]: W1216 03:30:00.070818 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.070920 kubelet[3199]: E1216 03:30:00.070838 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:00.088000 audit: BPF prog-id=166 op=LOAD Dec 16 03:30:00.096000 audit: BPF prog-id=167 op=LOAD Dec 16 03:30:00.096000 audit[3983]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3972 pid=3983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:00.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336303465316635653063313736376363643936363133376234303861 Dec 16 03:30:00.097000 audit: BPF prog-id=167 op=UNLOAD Dec 16 03:30:00.097000 audit[3983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3972 pid=3983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:00.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336303465316635653063313736376363643936363133376234303861 Dec 16 03:30:00.097000 audit: BPF prog-id=168 op=LOAD Dec 16 03:30:00.097000 audit[3983]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3972 pid=3983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:00.097000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336303465316635653063313736376363643936363133376234303861 Dec 16 03:30:00.097000 audit: BPF prog-id=169 op=LOAD Dec 16 03:30:00.097000 audit[3983]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3972 pid=3983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:00.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336303465316635653063313736376363643936363133376234303861 Dec 16 03:30:00.098000 audit: BPF prog-id=169 op=UNLOAD Dec 16 03:30:00.098000 audit[3983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3972 pid=3983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:00.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336303465316635653063313736376363643936363133376234303861 Dec 16 03:30:00.098000 audit: BPF prog-id=168 op=UNLOAD Dec 16 03:30:00.098000 audit[3983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3972 pid=3983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:30:00.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336303465316635653063313736376363643936363133376234303861 Dec 16 03:30:00.098000 audit: BPF prog-id=170 op=LOAD Dec 16 03:30:00.098000 audit[3983]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3972 pid=3983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:00.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336303465316635653063313736376363643936363133376234303861 Dec 16 03:30:00.105907 kubelet[3199]: E1216 03:30:00.105877 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:00.105996 kubelet[3199]: W1216 03:30:00.105909 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:00.105996 kubelet[3199]: E1216 03:30:00.105979 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:00.179006 containerd[1851]: time="2025-12-16T03:30:00.176350161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h99kj,Uid:9c9cc891-255e-45da-9628-24ee399ddb12,Namespace:calico-system,Attempt:0,} returns sandbox id \"c604e1f5e0c1767ccd966137b408a7131942a0c0b98fb6a7a300b5ca75d56e75\"" Dec 16 03:30:00.320000 audit[4037]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4037 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:00.320000 audit[4037]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff56f47210 a2=0 a3=7fff56f471fc items=0 ppid=3301 pid=4037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:00.320000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:00.337000 audit[4037]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4037 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:00.337000 audit[4037]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff56f47210 a2=0 a3=0 items=0 ppid=3301 pid=4037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:00.337000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:01.386471 kubelet[3199]: E1216 03:30:01.379456 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:30:02.712329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3443122109.mount: Deactivated successfully. Dec 16 03:30:03.395699 kubelet[3199]: E1216 03:30:03.395542 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:30:04.426972 containerd[1851]: time="2025-12-16T03:30:04.426920003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:30:04.433952 containerd[1851]: time="2025-12-16T03:30:04.433906927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 03:30:04.436909 containerd[1851]: time="2025-12-16T03:30:04.436739294Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:30:04.442142 containerd[1851]: time="2025-12-16T03:30:04.442074094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:30:04.443871 containerd[1851]: time="2025-12-16T03:30:04.443642125Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 
4.551061253s" Dec 16 03:30:04.443871 containerd[1851]: time="2025-12-16T03:30:04.443696047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 03:30:04.445446 containerd[1851]: time="2025-12-16T03:30:04.445414277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 03:30:04.503480 containerd[1851]: time="2025-12-16T03:30:04.503429680Z" level=info msg="CreateContainer within sandbox \"e103a0288cc74246b44f03331e84dff8ed75ce0a4507ecefa88f375e6406a087\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 03:30:04.532016 containerd[1851]: time="2025-12-16T03:30:04.522780944Z" level=info msg="Container c96fa8ebf244412b07e1896833184550c1e40f30a67daaa6d6c96ed4a78ce861: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:30:04.547661 containerd[1851]: time="2025-12-16T03:30:04.547622605Z" level=info msg="CreateContainer within sandbox \"e103a0288cc74246b44f03331e84dff8ed75ce0a4507ecefa88f375e6406a087\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c96fa8ebf244412b07e1896833184550c1e40f30a67daaa6d6c96ed4a78ce861\"" Dec 16 03:30:04.548497 containerd[1851]: time="2025-12-16T03:30:04.548462640Z" level=info msg="StartContainer for \"c96fa8ebf244412b07e1896833184550c1e40f30a67daaa6d6c96ed4a78ce861\"" Dec 16 03:30:04.551236 containerd[1851]: time="2025-12-16T03:30:04.551133024Z" level=info msg="connecting to shim c96fa8ebf244412b07e1896833184550c1e40f30a67daaa6d6c96ed4a78ce861" address="unix:///run/containerd/s/e8cc3bd724bcaca2310db2b739ea2e329647d0bc5bd9aa4f7eacd8e9bee8c64a" protocol=ttrpc version=3 Dec 16 03:30:04.626455 systemd[1]: Started cri-containerd-c96fa8ebf244412b07e1896833184550c1e40f30a67daaa6d6c96ed4a78ce861.scope - libcontainer container c96fa8ebf244412b07e1896833184550c1e40f30a67daaa6d6c96ed4a78ce861. 
Dec 16 03:30:04.645128 kernel: kauditd_printk_skb: 52 callbacks suppressed Dec 16 03:30:04.645281 kernel: audit: type=1334 audit(1765855804.641:571): prog-id=171 op=LOAD Dec 16 03:30:04.641000 audit: BPF prog-id=171 op=LOAD Dec 16 03:30:04.641000 audit: BPF prog-id=172 op=LOAD Dec 16 03:30:04.647317 kernel: audit: type=1334 audit(1765855804.641:572): prog-id=172 op=LOAD Dec 16 03:30:04.655274 kernel: audit: type=1300 audit(1765855804.641:572): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3884 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:04.641000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3884 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:04.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366661386562663234343431326230376531383936383333313834 Dec 16 03:30:04.663423 kernel: audit: type=1327 audit(1765855804.641:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366661386562663234343431326230376531383936383333313834 Dec 16 03:30:04.663548 kernel: audit: type=1334 audit(1765855804.642:573): prog-id=172 op=UNLOAD Dec 16 03:30:04.642000 audit: BPF prog-id=172 op=UNLOAD Dec 16 03:30:04.669786 kernel: audit: type=1300 audit(1765855804.642:573): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=3884 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:04.642000 audit[4048]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3884 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:04.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366661386562663234343431326230376531383936383333313834 Dec 16 03:30:04.678112 kernel: audit: type=1327 audit(1765855804.642:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366661386562663234343431326230376531383936383333313834 Dec 16 03:30:04.678300 kernel: audit: type=1334 audit(1765855804.642:574): prog-id=173 op=LOAD Dec 16 03:30:04.642000 audit: BPF prog-id=173 op=LOAD Dec 16 03:30:04.642000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3884 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:04.690776 kernel: audit: type=1300 audit(1765855804.642:574): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3884 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:30:04.690895 kernel: audit: type=1327 audit(1765855804.642:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366661386562663234343431326230376531383936383333313834 Dec 16 03:30:04.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366661386562663234343431326230376531383936383333313834 Dec 16 03:30:04.642000 audit: BPF prog-id=174 op=LOAD Dec 16 03:30:04.642000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3884 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:04.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366661386562663234343431326230376531383936383333313834 Dec 16 03:30:04.642000 audit: BPF prog-id=174 op=UNLOAD Dec 16 03:30:04.642000 audit[4048]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3884 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:04.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366661386562663234343431326230376531383936383333313834 
Dec 16 03:30:04.642000 audit: BPF prog-id=173 op=UNLOAD Dec 16 03:30:04.642000 audit[4048]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3884 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:04.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366661386562663234343431326230376531383936383333313834 Dec 16 03:30:04.642000 audit: BPF prog-id=175 op=LOAD Dec 16 03:30:04.642000 audit[4048]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3884 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:04.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339366661386562663234343431326230376531383936383333313834 Dec 16 03:30:04.715729 containerd[1851]: time="2025-12-16T03:30:04.715632607Z" level=info msg="StartContainer for \"c96fa8ebf244412b07e1896833184550c1e40f30a67daaa6d6c96ed4a78ce861\" returns successfully" Dec 16 03:30:04.741704 kubelet[3199]: E1216 03:30:04.741595 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.741704 kubelet[3199]: W1216 03:30:04.741636 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, 
output: "" Dec 16 03:30:04.741704 kubelet[3199]: E1216 03:30:04.741672 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.744084 kubelet[3199]: E1216 03:30:04.742427 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.744084 kubelet[3199]: W1216 03:30:04.742445 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.744084 kubelet[3199]: E1216 03:30:04.742465 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.744084 kubelet[3199]: E1216 03:30:04.742682 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.744084 kubelet[3199]: W1216 03:30:04.742692 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.744084 kubelet[3199]: E1216 03:30:04.742704 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:04.744084 kubelet[3199]: E1216 03:30:04.743073 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.744084 kubelet[3199]: W1216 03:30:04.743083 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.744084 kubelet[3199]: E1216 03:30:04.743096 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.744084 kubelet[3199]: E1216 03:30:04.743355 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.746072 kubelet[3199]: W1216 03:30:04.743366 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.746072 kubelet[3199]: E1216 03:30:04.743379 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:04.746072 kubelet[3199]: E1216 03:30:04.743567 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.746072 kubelet[3199]: W1216 03:30:04.743576 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.746072 kubelet[3199]: E1216 03:30:04.743586 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.746072 kubelet[3199]: E1216 03:30:04.743782 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.746072 kubelet[3199]: W1216 03:30:04.743793 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.746072 kubelet[3199]: E1216 03:30:04.743852 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:04.746072 kubelet[3199]: E1216 03:30:04.744097 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.746072 kubelet[3199]: W1216 03:30:04.744108 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.746501 kubelet[3199]: E1216 03:30:04.744120 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.746501 kubelet[3199]: E1216 03:30:04.744433 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.746501 kubelet[3199]: W1216 03:30:04.744445 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.746501 kubelet[3199]: E1216 03:30:04.744458 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:04.746501 kubelet[3199]: E1216 03:30:04.745373 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.746501 kubelet[3199]: W1216 03:30:04.745386 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.746501 kubelet[3199]: E1216 03:30:04.745400 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.746501 kubelet[3199]: E1216 03:30:04.745642 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.746501 kubelet[3199]: W1216 03:30:04.745651 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.746501 kubelet[3199]: E1216 03:30:04.745663 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:04.746902 kubelet[3199]: E1216 03:30:04.745908 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.746902 kubelet[3199]: W1216 03:30:04.745918 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.746902 kubelet[3199]: E1216 03:30:04.745930 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.746902 kubelet[3199]: E1216 03:30:04.746836 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.746902 kubelet[3199]: W1216 03:30:04.746848 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.746902 kubelet[3199]: E1216 03:30:04.746862 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:04.747165 kubelet[3199]: E1216 03:30:04.747048 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.747165 kubelet[3199]: W1216 03:30:04.747057 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.747165 kubelet[3199]: E1216 03:30:04.747068 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.747165 kubelet[3199]: E1216 03:30:04.747272 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.747165 kubelet[3199]: W1216 03:30:04.747282 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.747165 kubelet[3199]: E1216 03:30:04.747293 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:04.747165 kubelet[3199]: E1216 03:30:04.747775 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.747165 kubelet[3199]: W1216 03:30:04.747788 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.747165 kubelet[3199]: E1216 03:30:04.747802 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.752480 kubelet[3199]: E1216 03:30:04.752448 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.752480 kubelet[3199]: W1216 03:30:04.752477 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.752665 kubelet[3199]: E1216 03:30:04.752506 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:04.753609 kubelet[3199]: E1216 03:30:04.752783 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.753609 kubelet[3199]: W1216 03:30:04.752797 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.753609 kubelet[3199]: E1216 03:30:04.752824 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.753609 kubelet[3199]: E1216 03:30:04.753056 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.753609 kubelet[3199]: W1216 03:30:04.753066 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.753609 kubelet[3199]: E1216 03:30:04.753092 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:04.756642 kubelet[3199]: E1216 03:30:04.756614 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.756642 kubelet[3199]: W1216 03:30:04.756641 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.758192 kubelet[3199]: E1216 03:30:04.757234 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.759761 kubelet[3199]: E1216 03:30:04.758959 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.759761 kubelet[3199]: W1216 03:30:04.759082 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.759761 kubelet[3199]: E1216 03:30:04.759109 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:04.759761 kubelet[3199]: E1216 03:30:04.759438 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.759761 kubelet[3199]: W1216 03:30:04.759450 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.759761 kubelet[3199]: E1216 03:30:04.759606 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.760993 kubelet[3199]: E1216 03:30:04.760582 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.760993 kubelet[3199]: W1216 03:30:04.760597 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.760993 kubelet[3199]: E1216 03:30:04.760617 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:04.763432 kubelet[3199]: E1216 03:30:04.763408 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.763432 kubelet[3199]: W1216 03:30:04.763432 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.763601 kubelet[3199]: E1216 03:30:04.763484 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.763830 kubelet[3199]: E1216 03:30:04.763809 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.763909 kubelet[3199]: W1216 03:30:04.763836 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.763955 kubelet[3199]: E1216 03:30:04.763914 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:04.765253 kubelet[3199]: E1216 03:30:04.765232 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.765253 kubelet[3199]: W1216 03:30:04.765253 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.765913 kubelet[3199]: E1216 03:30:04.765714 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.765913 kubelet[3199]: W1216 03:30:04.765727 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.765913 kubelet[3199]: E1216 03:30:04.765771 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.766457 kubelet[3199]: E1216 03:30:04.766430 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:04.767377 kubelet[3199]: E1216 03:30:04.767353 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.767377 kubelet[3199]: W1216 03:30:04.767371 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.767506 kubelet[3199]: E1216 03:30:04.767417 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.768540 kubelet[3199]: E1216 03:30:04.768411 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.768540 kubelet[3199]: W1216 03:30:04.768436 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.768540 kubelet[3199]: E1216 03:30:04.768472 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:04.773892 kubelet[3199]: E1216 03:30:04.769295 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.773892 kubelet[3199]: W1216 03:30:04.769311 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.773892 kubelet[3199]: E1216 03:30:04.769331 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.773892 kubelet[3199]: E1216 03:30:04.770498 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.773892 kubelet[3199]: W1216 03:30:04.770513 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.773892 kubelet[3199]: E1216 03:30:04.771145 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:04.773892 kubelet[3199]: E1216 03:30:04.773409 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.773892 kubelet[3199]: W1216 03:30:04.773427 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.773892 kubelet[3199]: E1216 03:30:04.773448 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:04.775657 kubelet[3199]: E1216 03:30:04.775249 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:04.775657 kubelet[3199]: W1216 03:30:04.775591 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:04.776897 kubelet[3199]: E1216 03:30:04.775972 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:05.379848 kubelet[3199]: E1216 03:30:05.379596 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:30:05.730544 kubelet[3199]: I1216 03:30:05.730508 3199 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 03:30:05.757099 kubelet[3199]: E1216 03:30:05.757019 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.757099 kubelet[3199]: W1216 03:30:05.757049 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.757099 kubelet[3199]: E1216 03:30:05.757073 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:05.757775 kubelet[3199]: E1216 03:30:05.757417 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.757775 kubelet[3199]: W1216 03:30:05.757428 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.757775 kubelet[3199]: E1216 03:30:05.757444 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:05.757775 kubelet[3199]: E1216 03:30:05.757708 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.757775 kubelet[3199]: W1216 03:30:05.757718 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.757775 kubelet[3199]: E1216 03:30:05.757731 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:05.758092 kubelet[3199]: E1216 03:30:05.758057 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.758092 kubelet[3199]: W1216 03:30:05.758075 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.758092 kubelet[3199]: E1216 03:30:05.758090 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:05.758346 kubelet[3199]: E1216 03:30:05.758339 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.758408 kubelet[3199]: W1216 03:30:05.758350 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.758408 kubelet[3199]: E1216 03:30:05.758363 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:05.758594 kubelet[3199]: E1216 03:30:05.758568 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.758594 kubelet[3199]: W1216 03:30:05.758585 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.758730 kubelet[3199]: E1216 03:30:05.758599 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:05.758810 kubelet[3199]: E1216 03:30:05.758795 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.758867 kubelet[3199]: W1216 03:30:05.758809 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.758867 kubelet[3199]: E1216 03:30:05.758821 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:05.759070 kubelet[3199]: E1216 03:30:05.759051 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.759070 kubelet[3199]: W1216 03:30:05.759066 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.759290 kubelet[3199]: E1216 03:30:05.759081 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:05.759353 kubelet[3199]: E1216 03:30:05.759315 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.759353 kubelet[3199]: W1216 03:30:05.759326 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.759353 kubelet[3199]: E1216 03:30:05.759338 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:05.759616 kubelet[3199]: E1216 03:30:05.759603 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.759691 kubelet[3199]: W1216 03:30:05.759615 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.759691 kubelet[3199]: E1216 03:30:05.759629 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:05.759841 kubelet[3199]: E1216 03:30:05.759822 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.759841 kubelet[3199]: W1216 03:30:05.759837 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.759950 kubelet[3199]: E1216 03:30:05.759849 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:05.760059 kubelet[3199]: E1216 03:30:05.760042 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.760059 kubelet[3199]: W1216 03:30:05.760055 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.760533 kubelet[3199]: E1216 03:30:05.760067 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:05.760584 kubelet[3199]: E1216 03:30:05.760568 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.760584 kubelet[3199]: W1216 03:30:05.760580 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.760675 kubelet[3199]: E1216 03:30:05.760594 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:05.760926 kubelet[3199]: E1216 03:30:05.760809 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.760926 kubelet[3199]: W1216 03:30:05.760824 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.760926 kubelet[3199]: E1216 03:30:05.760837 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:05.761114 kubelet[3199]: E1216 03:30:05.761096 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.761114 kubelet[3199]: W1216 03:30:05.761110 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.761251 kubelet[3199]: E1216 03:30:05.761123 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:05.787367 kubelet[3199]: E1216 03:30:05.787324 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.787367 kubelet[3199]: W1216 03:30:05.787355 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.787598 kubelet[3199]: E1216 03:30:05.787381 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:05.787833 kubelet[3199]: E1216 03:30:05.787696 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.787833 kubelet[3199]: W1216 03:30:05.787712 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.787833 kubelet[3199]: E1216 03:30:05.787730 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:05.788076 kubelet[3199]: E1216 03:30:05.788053 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.788076 kubelet[3199]: W1216 03:30:05.788072 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.788076 kubelet[3199]: E1216 03:30:05.788195 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:05.788595 kubelet[3199]: E1216 03:30:05.788572 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.788595 kubelet[3199]: W1216 03:30:05.788592 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.788758 kubelet[3199]: E1216 03:30:05.788617 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:05.788833 kubelet[3199]: E1216 03:30:05.788818 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.788968 kubelet[3199]: W1216 03:30:05.788832 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.788968 kubelet[3199]: E1216 03:30:05.788859 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:05.789225 kubelet[3199]: E1216 03:30:05.789031 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.789225 kubelet[3199]: W1216 03:30:05.789042 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.789225 kubelet[3199]: E1216 03:30:05.789067 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:05.789379 kubelet[3199]: E1216 03:30:05.789267 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.789379 kubelet[3199]: W1216 03:30:05.789277 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.789838 kubelet[3199]: E1216 03:30:05.789494 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.789838 kubelet[3199]: W1216 03:30:05.789506 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.789838 kubelet[3199]: E1216 03:30:05.789518 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:05.789838 kubelet[3199]: E1216 03:30:05.789694 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.789838 kubelet[3199]: W1216 03:30:05.789704 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.789838 kubelet[3199]: E1216 03:30:05.789716 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:05.790210 kubelet[3199]: E1216 03:30:05.789904 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.790210 kubelet[3199]: W1216 03:30:05.789913 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.790210 kubelet[3199]: E1216 03:30:05.789925 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:05.790210 kubelet[3199]: E1216 03:30:05.790089 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.790210 kubelet[3199]: W1216 03:30:05.790097 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.790210 kubelet[3199]: E1216 03:30:05.790108 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:05.790490 kubelet[3199]: E1216 03:30:05.790329 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.790490 kubelet[3199]: W1216 03:30:05.790338 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.790490 kubelet[3199]: E1216 03:30:05.790353 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:05.790757 kubelet[3199]: E1216 03:30:05.790736 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.790757 kubelet[3199]: W1216 03:30:05.790751 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.790881 kubelet[3199]: E1216 03:30:05.790837 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:05.791062 kubelet[3199]: E1216 03:30:05.791042 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.791062 kubelet[3199]: W1216 03:30:05.791057 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.791253 kubelet[3199]: E1216 03:30:05.791076 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:05.791336 kubelet[3199]: E1216 03:30:05.791313 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.791336 kubelet[3199]: W1216 03:30:05.791333 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.791542 kubelet[3199]: E1216 03:30:05.791347 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:05.791653 kubelet[3199]: E1216 03:30:05.791636 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.791653 kubelet[3199]: W1216 03:30:05.791650 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.791745 kubelet[3199]: E1216 03:30:05.791663 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:05.791934 kubelet[3199]: E1216 03:30:05.791916 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.791934 kubelet[3199]: W1216 03:30:05.791930 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.792040 kubelet[3199]: E1216 03:30:05.791944 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:30:05.792704 kubelet[3199]: E1216 03:30:05.792686 3199 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:30:05.792704 kubelet[3199]: W1216 03:30:05.792700 3199 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:30:05.792922 kubelet[3199]: E1216 03:30:05.792714 3199 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:30:06.002502 containerd[1851]: time="2025-12-16T03:30:06.002356778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:30:06.009284 containerd[1851]: time="2025-12-16T03:30:06.009126530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:06.011949 containerd[1851]: time="2025-12-16T03:30:06.011502229Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:30:06.020527 containerd[1851]: time="2025-12-16T03:30:06.020037766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:30:06.022894 containerd[1851]: time="2025-12-16T03:30:06.022826173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.577212152s" Dec 16 03:30:06.022894 containerd[1851]: time="2025-12-16T03:30:06.022883285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 03:30:06.029634 containerd[1851]: time="2025-12-16T03:30:06.029587514Z" level=info msg="CreateContainer within sandbox \"c604e1f5e0c1767ccd966137b408a7131942a0c0b98fb6a7a300b5ca75d56e75\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 03:30:06.047890 containerd[1851]: time="2025-12-16T03:30:06.047597956Z" level=info msg="Container ca2225f5699755412626b70974594e0867bdfc5b02154aff305296faa90f999f: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:30:06.071483 containerd[1851]: time="2025-12-16T03:30:06.071346815Z" level=info msg="CreateContainer within sandbox \"c604e1f5e0c1767ccd966137b408a7131942a0c0b98fb6a7a300b5ca75d56e75\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ca2225f5699755412626b70974594e0867bdfc5b02154aff305296faa90f999f\"" Dec 16 03:30:06.072356 containerd[1851]: time="2025-12-16T03:30:06.072303664Z" level=info msg="StartContainer for \"ca2225f5699755412626b70974594e0867bdfc5b02154aff305296faa90f999f\"" Dec 16 03:30:06.075812 containerd[1851]: time="2025-12-16T03:30:06.075711937Z" level=info msg="connecting to shim ca2225f5699755412626b70974594e0867bdfc5b02154aff305296faa90f999f" address="unix:///run/containerd/s/0784cc1695a03b3635eac28cd30895e50552857ece8073cb596f396c2006152d" protocol=ttrpc version=3 Dec 16 03:30:06.114500 systemd[1]: Started cri-containerd-ca2225f5699755412626b70974594e0867bdfc5b02154aff305296faa90f999f.scope - libcontainer container ca2225f5699755412626b70974594e0867bdfc5b02154aff305296faa90f999f. 
Dec 16 03:30:06.174000 audit: BPF prog-id=176 op=LOAD Dec 16 03:30:06.174000 audit[4159]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3972 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:06.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361323232356635363939373535343132363236623730393734353934 Dec 16 03:30:06.175000 audit: BPF prog-id=177 op=LOAD Dec 16 03:30:06.175000 audit[4159]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3972 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:06.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361323232356635363939373535343132363236623730393734353934 Dec 16 03:30:06.175000 audit: BPF prog-id=177 op=UNLOAD Dec 16 03:30:06.175000 audit[4159]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3972 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:06.175000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361323232356635363939373535343132363236623730393734353934 Dec 16 03:30:06.175000 audit: BPF prog-id=176 op=UNLOAD Dec 16 03:30:06.175000 audit[4159]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3972 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:06.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361323232356635363939373535343132363236623730393734353934 Dec 16 03:30:06.175000 audit: BPF prog-id=178 op=LOAD Dec 16 03:30:06.175000 audit[4159]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3972 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:06.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361323232356635363939373535343132363236623730393734353934 Dec 16 03:30:06.208159 containerd[1851]: time="2025-12-16T03:30:06.208029039Z" level=info msg="StartContainer for \"ca2225f5699755412626b70974594e0867bdfc5b02154aff305296faa90f999f\" returns successfully" Dec 16 03:30:06.218063 systemd[1]: cri-containerd-ca2225f5699755412626b70974594e0867bdfc5b02154aff305296faa90f999f.scope: Deactivated successfully. 
Dec 16 03:30:06.220000 audit: BPF prog-id=178 op=UNLOAD Dec 16 03:30:06.261471 containerd[1851]: time="2025-12-16T03:30:06.261328700Z" level=info msg="received container exit event container_id:\"ca2225f5699755412626b70974594e0867bdfc5b02154aff305296faa90f999f\" id:\"ca2225f5699755412626b70974594e0867bdfc5b02154aff305296faa90f999f\" pid:4171 exited_at:{seconds:1765855806 nanos:225489926}" Dec 16 03:30:06.293558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca2225f5699755412626b70974594e0867bdfc5b02154aff305296faa90f999f-rootfs.mount: Deactivated successfully. Dec 16 03:30:06.739392 containerd[1851]: time="2025-12-16T03:30:06.739349597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 03:30:06.759872 kubelet[3199]: I1216 03:30:06.759800 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-85964f4bc5-4jbhc" podStartSLOduration=3.205562463 podStartE2EDuration="7.759780558s" podCreationTimestamp="2025-12-16 03:29:59 +0000 UTC" firstStartedPulling="2025-12-16 03:29:59.890721417 +0000 UTC m=+24.719656736" lastFinishedPulling="2025-12-16 03:30:04.444939508 +0000 UTC m=+29.273874831" observedRunningTime="2025-12-16 03:30:04.778656345 +0000 UTC m=+29.607591677" watchObservedRunningTime="2025-12-16 03:30:06.759780558 +0000 UTC m=+31.588715982" Dec 16 03:30:07.382469 kubelet[3199]: E1216 03:30:07.381399 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:30:07.684262 kubelet[3199]: I1216 03:30:07.684206 3199 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 03:30:07.743000 audit[4208]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4208 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:07.743000 audit[4208]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe6f6c2520 a2=0 a3=7ffe6f6c250c items=0 ppid=3301 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:07.743000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:07.755000 audit[4208]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4208 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:07.755000 audit[4208]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe6f6c2520 a2=0 a3=7ffe6f6c250c items=0 ppid=3301 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:07.755000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:09.379366 kubelet[3199]: E1216 03:30:09.378894 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:30:11.379227 kubelet[3199]: E1216 03:30:11.378908 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl4wm" 
podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:30:13.280601 containerd[1851]: time="2025-12-16T03:30:13.280543416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:30:13.282782 containerd[1851]: time="2025-12-16T03:30:13.282566727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 03:30:13.285121 containerd[1851]: time="2025-12-16T03:30:13.285056088Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:30:13.289464 containerd[1851]: time="2025-12-16T03:30:13.288771203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:30:13.289464 containerd[1851]: time="2025-12-16T03:30:13.289328014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.549931934s" Dec 16 03:30:13.289464 containerd[1851]: time="2025-12-16T03:30:13.289357711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 03:30:13.341376 containerd[1851]: time="2025-12-16T03:30:13.341323255Z" level=info msg="CreateContainer within sandbox \"c604e1f5e0c1767ccd966137b408a7131942a0c0b98fb6a7a300b5ca75d56e75\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 03:30:13.365566 containerd[1851]: 
time="2025-12-16T03:30:13.365460939Z" level=info msg="Container e50595d596e789e239e326a357c1a471e230fb4da96a18820c5212c2cadfbf62: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:30:13.378385 kubelet[3199]: E1216 03:30:13.377980 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:30:13.389921 containerd[1851]: time="2025-12-16T03:30:13.389872725Z" level=info msg="CreateContainer within sandbox \"c604e1f5e0c1767ccd966137b408a7131942a0c0b98fb6a7a300b5ca75d56e75\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e50595d596e789e239e326a357c1a471e230fb4da96a18820c5212c2cadfbf62\"" Dec 16 03:30:13.392193 containerd[1851]: time="2025-12-16T03:30:13.391331534Z" level=info msg="StartContainer for \"e50595d596e789e239e326a357c1a471e230fb4da96a18820c5212c2cadfbf62\"" Dec 16 03:30:13.414090 containerd[1851]: time="2025-12-16T03:30:13.393611731Z" level=info msg="connecting to shim e50595d596e789e239e326a357c1a471e230fb4da96a18820c5212c2cadfbf62" address="unix:///run/containerd/s/0784cc1695a03b3635eac28cd30895e50552857ece8073cb596f396c2006152d" protocol=ttrpc version=3 Dec 16 03:30:13.423427 systemd[1]: Started cri-containerd-e50595d596e789e239e326a357c1a471e230fb4da96a18820c5212c2cadfbf62.scope - libcontainer container e50595d596e789e239e326a357c1a471e230fb4da96a18820c5212c2cadfbf62. 
Dec 16 03:30:13.514208 kernel: kauditd_printk_skb: 34 callbacks suppressed Dec 16 03:30:13.514346 kernel: audit: type=1334 audit(1765855813.511:587): prog-id=179 op=LOAD Dec 16 03:30:13.511000 audit: BPF prog-id=179 op=LOAD Dec 16 03:30:13.511000 audit[4220]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3972 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:13.517220 kernel: audit: type=1300 audit(1765855813.511:587): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3972 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:13.521419 kernel: audit: type=1327 audit(1765855813.511:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303539356435393665373839653233396533323661333537633161 Dec 16 03:30:13.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303539356435393665373839653233396533323661333537633161 Dec 16 03:30:13.511000 audit: BPF prog-id=180 op=LOAD Dec 16 03:30:13.511000 audit[4220]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3972 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:13.531108 kernel: audit: type=1334 
audit(1765855813.511:588): prog-id=180 op=LOAD Dec 16 03:30:13.531483 kernel: audit: type=1300 audit(1765855813.511:588): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3972 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:13.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303539356435393665373839653233396533323661333537633161 Dec 16 03:30:13.511000 audit: BPF prog-id=180 op=UNLOAD Dec 16 03:30:13.542359 kernel: audit: type=1327 audit(1765855813.511:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303539356435393665373839653233396533323661333537633161 Dec 16 03:30:13.542408 kernel: audit: type=1334 audit(1765855813.511:589): prog-id=180 op=UNLOAD Dec 16 03:30:13.543192 kernel: audit: type=1300 audit(1765855813.511:589): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3972 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:13.511000 audit[4220]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3972 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:13.511000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303539356435393665373839653233396533323661333537633161 Dec 16 03:30:13.551392 kernel: audit: type=1327 audit(1765855813.511:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303539356435393665373839653233396533323661333537633161 Dec 16 03:30:13.511000 audit: BPF prog-id=179 op=UNLOAD Dec 16 03:30:13.555904 kernel: audit: type=1334 audit(1765855813.511:590): prog-id=179 op=UNLOAD Dec 16 03:30:13.511000 audit[4220]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3972 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:13.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303539356435393665373839653233396533323661333537633161 Dec 16 03:30:13.511000 audit: BPF prog-id=181 op=LOAD Dec 16 03:30:13.511000 audit[4220]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3972 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:13.511000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303539356435393665373839653233396533323661333537633161 Dec 16 03:30:13.572144 containerd[1851]: time="2025-12-16T03:30:13.572097469Z" level=info msg="StartContainer for \"e50595d596e789e239e326a357c1a471e230fb4da96a18820c5212c2cadfbf62\" returns successfully" Dec 16 03:30:14.911786 systemd[1]: cri-containerd-e50595d596e789e239e326a357c1a471e230fb4da96a18820c5212c2cadfbf62.scope: Deactivated successfully. Dec 16 03:30:14.916352 systemd[1]: cri-containerd-e50595d596e789e239e326a357c1a471e230fb4da96a18820c5212c2cadfbf62.scope: Consumed 632ms CPU time, 158.3M memory peak, 7.1M read from disk, 171.3M written to disk. Dec 16 03:30:14.915000 audit: BPF prog-id=181 op=UNLOAD Dec 16 03:30:14.947344 containerd[1851]: time="2025-12-16T03:30:14.947300601Z" level=info msg="received container exit event container_id:\"e50595d596e789e239e326a357c1a471e230fb4da96a18820c5212c2cadfbf62\" id:\"e50595d596e789e239e326a357c1a471e230fb4da96a18820c5212c2cadfbf62\" pid:4234 exited_at:{seconds:1765855814 nanos:946297350}" Dec 16 03:30:14.989774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e50595d596e789e239e326a357c1a471e230fb4da96a18820c5212c2cadfbf62-rootfs.mount: Deactivated successfully. Dec 16 03:30:15.030203 kubelet[3199]: I1216 03:30:15.029431 3199 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 03:30:15.147020 systemd[1]: Created slice kubepods-besteffort-podcdf7ec9a_0f7c_449b_9376_44e251f47cc8.slice - libcontainer container kubepods-besteffort-podcdf7ec9a_0f7c_449b_9376_44e251f47cc8.slice. Dec 16 03:30:15.158223 systemd[1]: Created slice kubepods-besteffort-pod69c31889_528a_4f2f_822c_6d89b94291a9.slice - libcontainer container kubepods-besteffort-pod69c31889_528a_4f2f_822c_6d89b94291a9.slice. 
Dec 16 03:30:15.172314 systemd[1]: Created slice kubepods-besteffort-pod0feff437_5720_498b_a3c1_fbae9f5f245c.slice - libcontainer container kubepods-besteffort-pod0feff437_5720_498b_a3c1_fbae9f5f245c.slice. Dec 16 03:30:15.173424 kubelet[3199]: I1216 03:30:15.172435 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt978\" (UniqueName: \"kubernetes.io/projected/56b458df-a303-4716-b4d2-e294c54eff31-kube-api-access-qt978\") pod \"coredns-668d6bf9bc-lgkx9\" (UID: \"56b458df-a303-4716-b4d2-e294c54eff31\") " pod="kube-system/coredns-668d6bf9bc-lgkx9" Dec 16 03:30:15.173424 kubelet[3199]: I1216 03:30:15.172478 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdf7ec9a-0f7c-449b-9376-44e251f47cc8-whisker-ca-bundle\") pod \"whisker-57d844d687-dk2jq\" (UID: \"cdf7ec9a-0f7c-449b-9376-44e251f47cc8\") " pod="calico-system/whisker-57d844d687-dk2jq" Dec 16 03:30:15.173424 kubelet[3199]: I1216 03:30:15.172509 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvbg9\" (UniqueName: \"kubernetes.io/projected/1598e409-2173-4fc3-8415-b507d5511623-kube-api-access-wvbg9\") pod \"goldmane-666569f655-95jjv\" (UID: \"1598e409-2173-4fc3-8415-b507d5511623\") " pod="calico-system/goldmane-666569f655-95jjv" Dec 16 03:30:15.173424 kubelet[3199]: I1216 03:30:15.172540 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68pqq\" (UniqueName: \"kubernetes.io/projected/69c31889-528a-4f2f-822c-6d89b94291a9-kube-api-access-68pqq\") pod \"calico-kube-controllers-6b698bdfc8-x42pm\" (UID: \"69c31889-528a-4f2f-822c-6d89b94291a9\") " pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" Dec 16 03:30:15.173424 kubelet[3199]: I1216 03:30:15.172569 3199 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1598e409-2173-4fc3-8415-b507d5511623-goldmane-ca-bundle\") pod \"goldmane-666569f655-95jjv\" (UID: \"1598e409-2173-4fc3-8415-b507d5511623\") " pod="calico-system/goldmane-666569f655-95jjv" Dec 16 03:30:15.173582 kubelet[3199]: I1216 03:30:15.172633 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cdf7ec9a-0f7c-449b-9376-44e251f47cc8-whisker-backend-key-pair\") pod \"whisker-57d844d687-dk2jq\" (UID: \"cdf7ec9a-0f7c-449b-9376-44e251f47cc8\") " pod="calico-system/whisker-57d844d687-dk2jq" Dec 16 03:30:15.173582 kubelet[3199]: I1216 03:30:15.172665 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77fp8\" (UniqueName: \"kubernetes.io/projected/cdf7ec9a-0f7c-449b-9376-44e251f47cc8-kube-api-access-77fp8\") pod \"whisker-57d844d687-dk2jq\" (UID: \"cdf7ec9a-0f7c-449b-9376-44e251f47cc8\") " pod="calico-system/whisker-57d844d687-dk2jq" Dec 16 03:30:15.173582 kubelet[3199]: I1216 03:30:15.172700 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69c31889-528a-4f2f-822c-6d89b94291a9-tigera-ca-bundle\") pod \"calico-kube-controllers-6b698bdfc8-x42pm\" (UID: \"69c31889-528a-4f2f-822c-6d89b94291a9\") " pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" Dec 16 03:30:15.173582 kubelet[3199]: I1216 03:30:15.172725 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1598e409-2173-4fc3-8415-b507d5511623-config\") pod \"goldmane-666569f655-95jjv\" (UID: \"1598e409-2173-4fc3-8415-b507d5511623\") " pod="calico-system/goldmane-666569f655-95jjv" Dec 16 03:30:15.173582 
kubelet[3199]: I1216 03:30:15.172752 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e32d83e7-8260-42ce-a13a-b8e2a7a65181-calico-apiserver-certs\") pod \"calico-apiserver-5f46b988d5-xtbpb\" (UID: \"e32d83e7-8260-42ce-a13a-b8e2a7a65181\") " pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" Dec 16 03:30:15.173714 kubelet[3199]: I1216 03:30:15.172780 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdx4g\" (UniqueName: \"kubernetes.io/projected/e32d83e7-8260-42ce-a13a-b8e2a7a65181-kube-api-access-tdx4g\") pod \"calico-apiserver-5f46b988d5-xtbpb\" (UID: \"e32d83e7-8260-42ce-a13a-b8e2a7a65181\") " pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" Dec 16 03:30:15.173714 kubelet[3199]: I1216 03:30:15.172811 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c016e4c7-aae1-4c29-9a44-bb2d4d75b201-config-volume\") pod \"coredns-668d6bf9bc-bxg4k\" (UID: \"c016e4c7-aae1-4c29-9a44-bb2d4d75b201\") " pod="kube-system/coredns-668d6bf9bc-bxg4k" Dec 16 03:30:15.173714 kubelet[3199]: I1216 03:30:15.172837 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k88lr\" (UniqueName: \"kubernetes.io/projected/0feff437-5720-498b-a3c1-fbae9f5f245c-kube-api-access-k88lr\") pod \"calico-apiserver-5f46b988d5-dv5td\" (UID: \"0feff437-5720-498b-a3c1-fbae9f5f245c\") " pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" Dec 16 03:30:15.173714 kubelet[3199]: I1216 03:30:15.172869 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx8vd\" (UniqueName: \"kubernetes.io/projected/c016e4c7-aae1-4c29-9a44-bb2d4d75b201-kube-api-access-hx8vd\") pod 
\"coredns-668d6bf9bc-bxg4k\" (UID: \"c016e4c7-aae1-4c29-9a44-bb2d4d75b201\") " pod="kube-system/coredns-668d6bf9bc-bxg4k" Dec 16 03:30:15.173714 kubelet[3199]: I1216 03:30:15.172895 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1598e409-2173-4fc3-8415-b507d5511623-goldmane-key-pair\") pod \"goldmane-666569f655-95jjv\" (UID: \"1598e409-2173-4fc3-8415-b507d5511623\") " pod="calico-system/goldmane-666569f655-95jjv" Dec 16 03:30:15.173899 kubelet[3199]: I1216 03:30:15.172927 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56b458df-a303-4716-b4d2-e294c54eff31-config-volume\") pod \"coredns-668d6bf9bc-lgkx9\" (UID: \"56b458df-a303-4716-b4d2-e294c54eff31\") " pod="kube-system/coredns-668d6bf9bc-lgkx9" Dec 16 03:30:15.173899 kubelet[3199]: I1216 03:30:15.172952 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0feff437-5720-498b-a3c1-fbae9f5f245c-calico-apiserver-certs\") pod \"calico-apiserver-5f46b988d5-dv5td\" (UID: \"0feff437-5720-498b-a3c1-fbae9f5f245c\") " pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" Dec 16 03:30:15.186006 systemd[1]: Created slice kubepods-burstable-pod56b458df_a303_4716_b4d2_e294c54eff31.slice - libcontainer container kubepods-burstable-pod56b458df_a303_4716_b4d2_e294c54eff31.slice. Dec 16 03:30:15.196633 systemd[1]: Created slice kubepods-burstable-podc016e4c7_aae1_4c29_9a44_bb2d4d75b201.slice - libcontainer container kubepods-burstable-podc016e4c7_aae1_4c29_9a44_bb2d4d75b201.slice. Dec 16 03:30:15.210508 systemd[1]: Created slice kubepods-besteffort-pode32d83e7_8260_42ce_a13a_b8e2a7a65181.slice - libcontainer container kubepods-besteffort-pode32d83e7_8260_42ce_a13a_b8e2a7a65181.slice. 
Dec 16 03:30:15.219711 systemd[1]: Created slice kubepods-besteffort-pod1598e409_2173_4fc3_8415_b507d5511623.slice - libcontainer container kubepods-besteffort-pod1598e409_2173_4fc3_8415_b507d5511623.slice. Dec 16 03:30:15.395561 systemd[1]: Created slice kubepods-besteffort-pod39c8b25e_ea78_443a_855e_e43746267826.slice - libcontainer container kubepods-besteffort-pod39c8b25e_ea78_443a_855e_e43746267826.slice. Dec 16 03:30:15.398952 containerd[1851]: time="2025-12-16T03:30:15.398908564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pl4wm,Uid:39c8b25e-ea78-443a-855e-e43746267826,Namespace:calico-system,Attempt:0,}" Dec 16 03:30:15.453552 containerd[1851]: time="2025-12-16T03:30:15.453491577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d844d687-dk2jq,Uid:cdf7ec9a-0f7c-449b-9376-44e251f47cc8,Namespace:calico-system,Attempt:0,}" Dec 16 03:30:15.466683 containerd[1851]: time="2025-12-16T03:30:15.466634065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b698bdfc8-x42pm,Uid:69c31889-528a-4f2f-822c-6d89b94291a9,Namespace:calico-system,Attempt:0,}" Dec 16 03:30:15.486026 containerd[1851]: time="2025-12-16T03:30:15.485814909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f46b988d5-dv5td,Uid:0feff437-5720-498b-a3c1-fbae9f5f245c,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:30:15.497830 containerd[1851]: time="2025-12-16T03:30:15.497703733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lgkx9,Uid:56b458df-a303-4716-b4d2-e294c54eff31,Namespace:kube-system,Attempt:0,}" Dec 16 03:30:15.507419 containerd[1851]: time="2025-12-16T03:30:15.507200063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bxg4k,Uid:c016e4c7-aae1-4c29-9a44-bb2d4d75b201,Namespace:kube-system,Attempt:0,}" Dec 16 03:30:15.517489 containerd[1851]: time="2025-12-16T03:30:15.517433692Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5f46b988d5-xtbpb,Uid:e32d83e7-8260-42ce-a13a-b8e2a7a65181,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:30:15.525372 containerd[1851]: time="2025-12-16T03:30:15.525303046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-95jjv,Uid:1598e409-2173-4fc3-8415-b507d5511623,Namespace:calico-system,Attempt:0,}" Dec 16 03:30:15.806129 containerd[1851]: time="2025-12-16T03:30:15.805989129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 03:30:17.945292 containerd[1851]: time="2025-12-16T03:30:17.944427841Z" level=error msg="Failed to destroy network for sandbox \"420be57c2f861cddccb3d51b2057956af85bb04c9844cc9a66ff8ecb15ff4574\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:17.949550 systemd[1]: run-netns-cni\x2d147a82b9\x2d53c4\x2dcb8f\x2df0a9\x2d4db1aaae9b24.mount: Deactivated successfully. 
Dec 16 03:30:18.001213 containerd[1851]: time="2025-12-16T03:30:17.959461093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bxg4k,Uid:c016e4c7-aae1-4c29-9a44-bb2d4d75b201,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"420be57c2f861cddccb3d51b2057956af85bb04c9844cc9a66ff8ecb15ff4574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.004474 containerd[1851]: time="2025-12-16T03:30:18.003689746Z" level=error msg="Failed to destroy network for sandbox \"7cd1422a52a102f1b2bc1d79957829745aebc6ade391b87294b740521743543a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.008226 systemd[1]: run-netns-cni\x2df3fc8ffc\x2d5207\x2dded1\x2da51b\x2d5fa21eb326b5.mount: Deactivated successfully. 
Dec 16 03:30:18.011159 containerd[1851]: time="2025-12-16T03:30:18.011032171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f46b988d5-xtbpb,Uid:e32d83e7-8260-42ce-a13a-b8e2a7a65181,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd1422a52a102f1b2bc1d79957829745aebc6ade391b87294b740521743543a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.019425 containerd[1851]: time="2025-12-16T03:30:17.983022329Z" level=error msg="Failed to destroy network for sandbox \"d8a7746786d2c4573f33077822f6cb4d02e331acc7a215fbf7b1c0596551c2a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.023843 systemd[1]: run-netns-cni\x2d5b022762\x2d7fce\x2d3280\x2d3983\x2dda6bae50c317.mount: Deactivated successfully. 
Dec 16 03:30:18.024695 containerd[1851]: time="2025-12-16T03:30:18.024446012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d844d687-dk2jq,Uid:cdf7ec9a-0f7c-449b-9376-44e251f47cc8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8a7746786d2c4573f33077822f6cb4d02e331acc7a215fbf7b1c0596551c2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.024695 containerd[1851]: time="2025-12-16T03:30:17.983056953Z" level=error msg="Failed to destroy network for sandbox \"b359717252f4a3cd9f6ac2fd7c2dcdd83733991b205d3313c3d844fc9acd56de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.029947 containerd[1851]: time="2025-12-16T03:30:18.029892576Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-95jjv,Uid:1598e409-2173-4fc3-8415-b507d5511623,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b359717252f4a3cd9f6ac2fd7c2dcdd83733991b205d3313c3d844fc9acd56de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.030351 containerd[1851]: time="2025-12-16T03:30:18.000742545Z" level=error msg="Failed to destroy network for sandbox \"3c3cd98e8a3672cbe2fb2571824372c25ada5d01b5a53339e566f375b95adc8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.031260 systemd[1]: 
run-netns-cni\x2d24e0b402\x2ddb86\x2dc7ef\x2d6b68\x2d3431d3ffb2ac.mount: Deactivated successfully. Dec 16 03:30:18.032125 kubelet[3199]: E1216 03:30:18.031924 3199 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420be57c2f861cddccb3d51b2057956af85bb04c9844cc9a66ff8ecb15ff4574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.034709 kubelet[3199]: E1216 03:30:18.032644 3199 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8a7746786d2c4573f33077822f6cb4d02e331acc7a215fbf7b1c0596551c2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.038701 systemd[1]: run-netns-cni\x2dccf08854\x2d3d31\x2dec0e\x2d9425\x2d90a54cbc138b.mount: Deactivated successfully. 
Dec 16 03:30:18.041591 kubelet[3199]: E1216 03:30:18.039811 3199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8a7746786d2c4573f33077822f6cb4d02e331acc7a215fbf7b1c0596551c2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d844d687-dk2jq" Dec 16 03:30:18.041591 kubelet[3199]: E1216 03:30:18.039858 3199 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8a7746786d2c4573f33077822f6cb4d02e331acc7a215fbf7b1c0596551c2a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d844d687-dk2jq" Dec 16 03:30:18.041591 kubelet[3199]: E1216 03:30:18.039936 3199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420be57c2f861cddccb3d51b2057956af85bb04c9844cc9a66ff8ecb15ff4574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bxg4k" Dec 16 03:30:18.041591 kubelet[3199]: E1216 03:30:18.039959 3199 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420be57c2f861cddccb3d51b2057956af85bb04c9844cc9a66ff8ecb15ff4574\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bxg4k" Dec 16 03:30:18.044778 
kubelet[3199]: E1216 03:30:18.044340 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bxg4k_kube-system(c016e4c7-aae1-4c29-9a44-bb2d4d75b201)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bxg4k_kube-system(c016e4c7-aae1-4c29-9a44-bb2d4d75b201)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"420be57c2f861cddccb3d51b2057956af85bb04c9844cc9a66ff8ecb15ff4574\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bxg4k" podUID="c016e4c7-aae1-4c29-9a44-bb2d4d75b201" Dec 16 03:30:18.046740 kubelet[3199]: E1216 03:30:18.046324 3199 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd1422a52a102f1b2bc1d79957829745aebc6ade391b87294b740521743543a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.046740 kubelet[3199]: E1216 03:30:18.046387 3199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd1422a52a102f1b2bc1d79957829745aebc6ade391b87294b740521743543a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" Dec 16 03:30:18.046740 kubelet[3199]: E1216 03:30:18.046412 3199 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd1422a52a102f1b2bc1d79957829745aebc6ade391b87294b740521743543a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" Dec 16 03:30:18.046971 kubelet[3199]: E1216 03:30:18.046468 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f46b988d5-xtbpb_calico-apiserver(e32d83e7-8260-42ce-a13a-b8e2a7a65181)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f46b988d5-xtbpb_calico-apiserver(e32d83e7-8260-42ce-a13a-b8e2a7a65181)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cd1422a52a102f1b2bc1d79957829745aebc6ade391b87294b740521743543a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" podUID="e32d83e7-8260-42ce-a13a-b8e2a7a65181" Dec 16 03:30:18.046971 kubelet[3199]: E1216 03:30:18.046521 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57d844d687-dk2jq_calico-system(cdf7ec9a-0f7c-449b-9376-44e251f47cc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57d844d687-dk2jq_calico-system(cdf7ec9a-0f7c-449b-9376-44e251f47cc8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8a7746786d2c4573f33077822f6cb4d02e331acc7a215fbf7b1c0596551c2a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57d844d687-dk2jq" podUID="cdf7ec9a-0f7c-449b-9376-44e251f47cc8" Dec 16 03:30:18.057081 kubelet[3199]: E1216 03:30:18.055995 3199 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"b359717252f4a3cd9f6ac2fd7c2dcdd83733991b205d3313c3d844fc9acd56de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.057081 kubelet[3199]: E1216 03:30:18.056103 3199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b359717252f4a3cd9f6ac2fd7c2dcdd83733991b205d3313c3d844fc9acd56de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-95jjv" Dec 16 03:30:18.057081 kubelet[3199]: E1216 03:30:18.056132 3199 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b359717252f4a3cd9f6ac2fd7c2dcdd83733991b205d3313c3d844fc9acd56de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-95jjv" Dec 16 03:30:18.057356 kubelet[3199]: E1216 03:30:18.056354 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-95jjv_calico-system(1598e409-2173-4fc3-8415-b507d5511623)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-95jjv_calico-system(1598e409-2173-4fc3-8415-b507d5511623)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b359717252f4a3cd9f6ac2fd7c2dcdd83733991b205d3313c3d844fc9acd56de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/goldmane-666569f655-95jjv" podUID="1598e409-2173-4fc3-8415-b507d5511623" Dec 16 03:30:18.087835 containerd[1851]: time="2025-12-16T03:30:18.038510015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pl4wm,Uid:39c8b25e-ea78-443a-855e-e43746267826,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3cd98e8a3672cbe2fb2571824372c25ada5d01b5a53339e566f375b95adc8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.088159 containerd[1851]: time="2025-12-16T03:30:18.000885041Z" level=error msg="Failed to destroy network for sandbox \"4604e9f8566fc7b51b5f1d4ff19073e8f09bce5d6aa8d4d677a882a5ada05e82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.094313 containerd[1851]: time="2025-12-16T03:30:18.094250371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b698bdfc8-x42pm,Uid:69c31889-528a-4f2f-822c-6d89b94291a9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4604e9f8566fc7b51b5f1d4ff19073e8f09bce5d6aa8d4d677a882a5ada05e82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.094788 containerd[1851]: time="2025-12-16T03:30:18.000794010Z" level=error msg="Failed to destroy network for sandbox \"62b327ba2f47124b9113c2fa7c9567e72d4d8c1478667d0e9ad6e9acc5e5feb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.100203 containerd[1851]: time="2025-12-16T03:30:18.100127498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f46b988d5-dv5td,Uid:0feff437-5720-498b-a3c1-fbae9f5f245c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62b327ba2f47124b9113c2fa7c9567e72d4d8c1478667d0e9ad6e9acc5e5feb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.101541 containerd[1851]: time="2025-12-16T03:30:18.000860076Z" level=error msg="Failed to destroy network for sandbox \"d6af179a9133f148de2727d6c545940ce68dae8cc20df036fdae327c460f6ce2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.101672 kubelet[3199]: E1216 03:30:18.101633 3199 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62b327ba2f47124b9113c2fa7c9567e72d4d8c1478667d0e9ad6e9acc5e5feb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.101745 kubelet[3199]: E1216 03:30:18.101693 3199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62b327ba2f47124b9113c2fa7c9567e72d4d8c1478667d0e9ad6e9acc5e5feb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" Dec 
16 03:30:18.101745 kubelet[3199]: E1216 03:30:18.101719 3199 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62b327ba2f47124b9113c2fa7c9567e72d4d8c1478667d0e9ad6e9acc5e5feb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" Dec 16 03:30:18.101832 kubelet[3199]: E1216 03:30:18.101772 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f46b988d5-dv5td_calico-apiserver(0feff437-5720-498b-a3c1-fbae9f5f245c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f46b988d5-dv5td_calico-apiserver(0feff437-5720-498b-a3c1-fbae9f5f245c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62b327ba2f47124b9113c2fa7c9567e72d4d8c1478667d0e9ad6e9acc5e5feb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" podUID="0feff437-5720-498b-a3c1-fbae9f5f245c" Dec 16 03:30:18.101832 kubelet[3199]: E1216 03:30:18.101821 3199 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4604e9f8566fc7b51b5f1d4ff19073e8f09bce5d6aa8d4d677a882a5ada05e82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.101970 kubelet[3199]: E1216 03:30:18.101852 3199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4604e9f8566fc7b51b5f1d4ff19073e8f09bce5d6aa8d4d677a882a5ada05e82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" Dec 16 03:30:18.101970 kubelet[3199]: E1216 03:30:18.101869 3199 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4604e9f8566fc7b51b5f1d4ff19073e8f09bce5d6aa8d4d677a882a5ada05e82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" Dec 16 03:30:18.101970 kubelet[3199]: E1216 03:30:18.101901 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b698bdfc8-x42pm_calico-system(69c31889-528a-4f2f-822c-6d89b94291a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b698bdfc8-x42pm_calico-system(69c31889-528a-4f2f-822c-6d89b94291a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4604e9f8566fc7b51b5f1d4ff19073e8f09bce5d6aa8d4d677a882a5ada05e82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" podUID="69c31889-528a-4f2f-822c-6d89b94291a9" Dec 16 03:30:18.102128 kubelet[3199]: E1216 03:30:18.101943 3199 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3cd98e8a3672cbe2fb2571824372c25ada5d01b5a53339e566f375b95adc8a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.102128 kubelet[3199]: E1216 03:30:18.101966 3199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3cd98e8a3672cbe2fb2571824372c25ada5d01b5a53339e566f375b95adc8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pl4wm" Dec 16 03:30:18.102128 kubelet[3199]: E1216 03:30:18.101988 3199 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3cd98e8a3672cbe2fb2571824372c25ada5d01b5a53339e566f375b95adc8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pl4wm" Dec 16 03:30:18.139728 kubelet[3199]: E1216 03:30:18.102023 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pl4wm_calico-system(39c8b25e-ea78-443a-855e-e43746267826)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pl4wm_calico-system(39c8b25e-ea78-443a-855e-e43746267826)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c3cd98e8a3672cbe2fb2571824372c25ada5d01b5a53339e566f375b95adc8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:30:18.139728 kubelet[3199]: E1216 03:30:18.119472 3199 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6af179a9133f148de2727d6c545940ce68dae8cc20df036fdae327c460f6ce2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.139728 kubelet[3199]: E1216 03:30:18.119533 3199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6af179a9133f148de2727d6c545940ce68dae8cc20df036fdae327c460f6ce2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lgkx9" Dec 16 03:30:18.140834 containerd[1851]: time="2025-12-16T03:30:18.119223483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lgkx9,Uid:56b458df-a303-4716-b4d2-e294c54eff31,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6af179a9133f148de2727d6c545940ce68dae8cc20df036fdae327c460f6ce2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:18.140936 kubelet[3199]: E1216 03:30:18.119878 3199 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6af179a9133f148de2727d6c545940ce68dae8cc20df036fdae327c460f6ce2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lgkx9" Dec 16 03:30:18.140936 kubelet[3199]: E1216 03:30:18.120200 3199 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lgkx9_kube-system(56b458df-a303-4716-b4d2-e294c54eff31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lgkx9_kube-system(56b458df-a303-4716-b4d2-e294c54eff31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6af179a9133f148de2727d6c545940ce68dae8cc20df036fdae327c460f6ce2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lgkx9" podUID="56b458df-a303-4716-b4d2-e294c54eff31" Dec 16 03:30:18.948474 systemd[1]: run-netns-cni\x2d57fb773c\x2d837f\x2d613a\x2df082\x2d38d608399bd4.mount: Deactivated successfully. Dec 16 03:30:18.948620 systemd[1]: run-netns-cni\x2d9edd4f93\x2d504f\x2df9e9\x2d1bbe\x2dd6ac475bf712.mount: Deactivated successfully. Dec 16 03:30:18.948704 systemd[1]: run-netns-cni\x2dc513949f\x2d6e29\x2d3b14\x2ddb4c\x2d879d0ecd03c7.mount: Deactivated successfully. Dec 16 03:30:25.629434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1166788585.mount: Deactivated successfully. 
Dec 16 03:30:25.762323 containerd[1851]: time="2025-12-16T03:30:25.762251837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:30:25.785990 containerd[1851]: time="2025-12-16T03:30:25.785254058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 03:30:25.918817 containerd[1851]: time="2025-12-16T03:30:25.918698471Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:30:25.931263 containerd[1851]: time="2025-12-16T03:30:25.930940478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:30:25.932613 containerd[1851]: time="2025-12-16T03:30:25.931819588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.125755138s" Dec 16 03:30:25.944633 containerd[1851]: time="2025-12-16T03:30:25.944568109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 03:30:25.987522 containerd[1851]: time="2025-12-16T03:30:25.987445384Z" level=info msg="CreateContainer within sandbox \"c604e1f5e0c1767ccd966137b408a7131942a0c0b98fb6a7a300b5ca75d56e75\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 03:30:26.092409 containerd[1851]: time="2025-12-16T03:30:26.092362114Z" level=info msg="Container 
6fc6cc04a3a1385ba8fef0c8a00b58f89d8c9abc137a8147e4ec5f8429c36102: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:30:26.095947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119146228.mount: Deactivated successfully. Dec 16 03:30:26.183470 containerd[1851]: time="2025-12-16T03:30:26.183419757Z" level=info msg="CreateContainer within sandbox \"c604e1f5e0c1767ccd966137b408a7131942a0c0b98fb6a7a300b5ca75d56e75\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6fc6cc04a3a1385ba8fef0c8a00b58f89d8c9abc137a8147e4ec5f8429c36102\"" Dec 16 03:30:26.185455 containerd[1851]: time="2025-12-16T03:30:26.184415792Z" level=info msg="StartContainer for \"6fc6cc04a3a1385ba8fef0c8a00b58f89d8c9abc137a8147e4ec5f8429c36102\"" Dec 16 03:30:26.193001 containerd[1851]: time="2025-12-16T03:30:26.192524750Z" level=info msg="connecting to shim 6fc6cc04a3a1385ba8fef0c8a00b58f89d8c9abc137a8147e4ec5f8429c36102" address="unix:///run/containerd/s/0784cc1695a03b3635eac28cd30895e50552857ece8073cb596f396c2006152d" protocol=ttrpc version=3 Dec 16 03:30:26.388331 systemd[1]: Started cri-containerd-6fc6cc04a3a1385ba8fef0c8a00b58f89d8c9abc137a8147e4ec5f8429c36102.scope - libcontainer container 6fc6cc04a3a1385ba8fef0c8a00b58f89d8c9abc137a8147e4ec5f8429c36102. 
Dec 16 03:30:26.504971 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 03:30:26.505687 kernel: audit: type=1334 audit(1765855826.501:593): prog-id=182 op=LOAD Dec 16 03:30:26.501000 audit: BPF prog-id=182 op=LOAD Dec 16 03:30:26.501000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000112488 a2=98 a3=0 items=0 ppid=3972 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:26.513445 kernel: audit: type=1300 audit(1765855826.501:593): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000112488 a2=98 a3=0 items=0 ppid=3972 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:26.520784 kernel: audit: type=1327 audit(1765855826.501:593): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666633663633034613361313338356261386665663063386130306235 Dec 16 03:30:26.501000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666633663633034613361313338356261386665663063386130306235 Dec 16 03:30:26.504000 audit: BPF prog-id=183 op=LOAD Dec 16 03:30:26.523874 kernel: audit: type=1334 audit(1765855826.504:594): prog-id=183 op=LOAD Dec 16 03:30:26.523987 kernel: audit: type=1300 audit(1765855826.504:594): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000112218 a2=98 a3=0 items=0 ppid=3972 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:26.504000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000112218 a2=98 a3=0 items=0 ppid=3972 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:26.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666633663633034613361313338356261386665663063386130306235 Dec 16 03:30:26.531693 kernel: audit: type=1327 audit(1765855826.504:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666633663633034613361313338356261386665663063386130306235 Dec 16 03:30:26.505000 audit: BPF prog-id=183 op=UNLOAD Dec 16 03:30:26.541275 kernel: audit: type=1334 audit(1765855826.505:595): prog-id=183 op=UNLOAD Dec 16 03:30:26.505000 audit[4497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3972 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:26.553874 kernel: audit: type=1300 audit(1765855826.505:595): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3972 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:26.554033 kernel: audit: type=1327 audit(1765855826.505:595): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666633663633034613361313338356261386665663063386130306235 Dec 16 03:30:26.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666633663633034613361313338356261386665663063386130306235 Dec 16 03:30:26.555867 kernel: audit: type=1334 audit(1765855826.505:596): prog-id=182 op=UNLOAD Dec 16 03:30:26.505000 audit: BPF prog-id=182 op=UNLOAD Dec 16 03:30:26.505000 audit[4497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3972 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:26.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666633663633034613361313338356261386665663063386130306235 Dec 16 03:30:26.505000 audit: BPF prog-id=184 op=LOAD Dec 16 03:30:26.505000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001126e8 a2=98 a3=0 items=0 ppid=3972 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:26.505000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666633663633034613361313338356261386665663063386130306235 Dec 16 03:30:26.572653 containerd[1851]: time="2025-12-16T03:30:26.572560079Z" level=info msg="StartContainer for \"6fc6cc04a3a1385ba8fef0c8a00b58f89d8c9abc137a8147e4ec5f8429c36102\" returns successfully" Dec 16 03:30:28.418086 containerd[1851]: time="2025-12-16T03:30:28.418031860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f46b988d5-xtbpb,Uid:e32d83e7-8260-42ce-a13a-b8e2a7a65181,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:30:28.535360 containerd[1851]: time="2025-12-16T03:30:28.534294378Z" level=error msg="Failed to destroy network for sandbox \"908698e5d000634389a8cd21ab2857837b2ebfad3cd70af522a4e794e1a92a20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:28.538313 systemd[1]: run-netns-cni\x2dcf842af0\x2d4736\x2d8426\x2d1f1a\x2d316972bc055f.mount: Deactivated successfully. 
Dec 16 03:30:28.544432 containerd[1851]: time="2025-12-16T03:30:28.544360453Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f46b988d5-xtbpb,Uid:e32d83e7-8260-42ce-a13a-b8e2a7a65181,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"908698e5d000634389a8cd21ab2857837b2ebfad3cd70af522a4e794e1a92a20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:28.563056 kubelet[3199]: E1216 03:30:28.551942 3199 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"908698e5d000634389a8cd21ab2857837b2ebfad3cd70af522a4e794e1a92a20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:28.563056 kubelet[3199]: E1216 03:30:28.563063 3199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"908698e5d000634389a8cd21ab2857837b2ebfad3cd70af522a4e794e1a92a20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" Dec 16 03:30:28.563876 kubelet[3199]: E1216 03:30:28.563086 3199 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"908698e5d000634389a8cd21ab2857837b2ebfad3cd70af522a4e794e1a92a20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" Dec 16 03:30:28.595622 kubelet[3199]: E1216 03:30:28.574246 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f46b988d5-xtbpb_calico-apiserver(e32d83e7-8260-42ce-a13a-b8e2a7a65181)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f46b988d5-xtbpb_calico-apiserver(e32d83e7-8260-42ce-a13a-b8e2a7a65181)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"908698e5d000634389a8cd21ab2857837b2ebfad3cd70af522a4e794e1a92a20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" podUID="e32d83e7-8260-42ce-a13a-b8e2a7a65181" Dec 16 03:30:29.328131 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 03:30:29.328271 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 03:30:29.389895 containerd[1851]: time="2025-12-16T03:30:29.389839819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f46b988d5-dv5td,Uid:0feff437-5720-498b-a3c1-fbae9f5f245c,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:30:29.393941 containerd[1851]: time="2025-12-16T03:30:29.392714700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lgkx9,Uid:56b458df-a303-4716-b4d2-e294c54eff31,Namespace:kube-system,Attempt:0,}" Dec 16 03:30:29.562018 containerd[1851]: time="2025-12-16T03:30:29.561962510Z" level=error msg="Failed to destroy network for sandbox \"5c45991c614dd88f7369d259f206eaab8f3c2a5defd3c926bacb7c5a8f44401b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:29.567320 containerd[1851]: time="2025-12-16T03:30:29.567266160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lgkx9,Uid:56b458df-a303-4716-b4d2-e294c54eff31,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c45991c614dd88f7369d259f206eaab8f3c2a5defd3c926bacb7c5a8f44401b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:29.569397 kubelet[3199]: E1216 03:30:29.567541 3199 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c45991c614dd88f7369d259f206eaab8f3c2a5defd3c926bacb7c5a8f44401b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:29.569397 kubelet[3199]: E1216 03:30:29.567604 3199 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c45991c614dd88f7369d259f206eaab8f3c2a5defd3c926bacb7c5a8f44401b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lgkx9" Dec 16 03:30:29.569397 kubelet[3199]: E1216 03:30:29.567633 3199 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c45991c614dd88f7369d259f206eaab8f3c2a5defd3c926bacb7c5a8f44401b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lgkx9" Dec 16 03:30:29.575195 kubelet[3199]: E1216 03:30:29.567688 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lgkx9_kube-system(56b458df-a303-4716-b4d2-e294c54eff31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lgkx9_kube-system(56b458df-a303-4716-b4d2-e294c54eff31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c45991c614dd88f7369d259f206eaab8f3c2a5defd3c926bacb7c5a8f44401b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lgkx9" podUID="56b458df-a303-4716-b4d2-e294c54eff31" Dec 16 03:30:29.572137 systemd[1]: run-netns-cni\x2db96b3fc0\x2deb4d\x2d5ff2\x2d64cc\x2d60c74c4b57f6.mount: Deactivated successfully. 
Dec 16 03:30:29.591008 containerd[1851]: time="2025-12-16T03:30:29.590883984Z" level=error msg="Failed to destroy network for sandbox \"638bba9d504a089f0ddc0a88f2d8773096b6d22dc2d7c0c7a1fc9f468b78b871\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:29.594714 systemd[1]: run-netns-cni\x2da11605e0\x2d97ae\x2d715f\x2d6ab7\x2dd7af7b970117.mount: Deactivated successfully. Dec 16 03:30:29.598060 containerd[1851]: time="2025-12-16T03:30:29.597997118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f46b988d5-dv5td,Uid:0feff437-5720-498b-a3c1-fbae9f5f245c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"638bba9d504a089f0ddc0a88f2d8773096b6d22dc2d7c0c7a1fc9f468b78b871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:29.598342 kubelet[3199]: E1216 03:30:29.598305 3199 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"638bba9d504a089f0ddc0a88f2d8773096b6d22dc2d7c0c7a1fc9f468b78b871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:30:29.598420 kubelet[3199]: E1216 03:30:29.598381 3199 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"638bba9d504a089f0ddc0a88f2d8773096b6d22dc2d7c0c7a1fc9f468b78b871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" Dec 16 03:30:29.598485 kubelet[3199]: E1216 03:30:29.598426 3199 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"638bba9d504a089f0ddc0a88f2d8773096b6d22dc2d7c0c7a1fc9f468b78b871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" Dec 16 03:30:29.598860 kubelet[3199]: E1216 03:30:29.598504 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f46b988d5-dv5td_calico-apiserver(0feff437-5720-498b-a3c1-fbae9f5f245c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f46b988d5-dv5td_calico-apiserver(0feff437-5720-498b-a3c1-fbae9f5f245c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"638bba9d504a089f0ddc0a88f2d8773096b6d22dc2d7c0c7a1fc9f468b78b871\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" podUID="0feff437-5720-498b-a3c1-fbae9f5f245c" Dec 16 03:30:30.380027 containerd[1851]: time="2025-12-16T03:30:30.379952844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-95jjv,Uid:1598e409-2173-4fc3-8415-b507d5511623,Namespace:calico-system,Attempt:0,}" Dec 16 03:30:30.569205 kubelet[3199]: I1216 03:30:30.568481 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h99kj" podStartSLOduration=5.8752864129999995 podStartE2EDuration="31.562197124s" podCreationTimestamp="2025-12-16 03:29:59 +0000 UTC" firstStartedPulling="2025-12-16 
03:30:00.258526656 +0000 UTC m=+25.087461980" lastFinishedPulling="2025-12-16 03:30:25.945437381 +0000 UTC m=+50.774372691" observedRunningTime="2025-12-16 03:30:26.928602023 +0000 UTC m=+51.757537362" watchObservedRunningTime="2025-12-16 03:30:30.562197124 +0000 UTC m=+55.391132454" Dec 16 03:30:30.631463 kubelet[3199]: I1216 03:30:30.631349 3199 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77fp8\" (UniqueName: \"kubernetes.io/projected/cdf7ec9a-0f7c-449b-9376-44e251f47cc8-kube-api-access-77fp8\") pod \"cdf7ec9a-0f7c-449b-9376-44e251f47cc8\" (UID: \"cdf7ec9a-0f7c-449b-9376-44e251f47cc8\") " Dec 16 03:30:30.634943 kubelet[3199]: I1216 03:30:30.632793 3199 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdf7ec9a-0f7c-449b-9376-44e251f47cc8-whisker-ca-bundle\") pod \"cdf7ec9a-0f7c-449b-9376-44e251f47cc8\" (UID: \"cdf7ec9a-0f7c-449b-9376-44e251f47cc8\") " Dec 16 03:30:30.634943 kubelet[3199]: I1216 03:30:30.632840 3199 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cdf7ec9a-0f7c-449b-9376-44e251f47cc8-whisker-backend-key-pair\") pod \"cdf7ec9a-0f7c-449b-9376-44e251f47cc8\" (UID: \"cdf7ec9a-0f7c-449b-9376-44e251f47cc8\") " Dec 16 03:30:30.641208 kubelet[3199]: I1216 03:30:30.639636 3199 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdf7ec9a-0f7c-449b-9376-44e251f47cc8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cdf7ec9a-0f7c-449b-9376-44e251f47cc8" (UID: "cdf7ec9a-0f7c-449b-9376-44e251f47cc8"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 03:30:30.640308 systemd[1]: var-lib-kubelet-pods-cdf7ec9a\x2d0f7c\x2d449b\x2d9376\x2d44e251f47cc8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d77fp8.mount: Deactivated successfully. Dec 16 03:30:30.643860 kubelet[3199]: I1216 03:30:30.643811 3199 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdf7ec9a-0f7c-449b-9376-44e251f47cc8-kube-api-access-77fp8" (OuterVolumeSpecName: "kube-api-access-77fp8") pod "cdf7ec9a-0f7c-449b-9376-44e251f47cc8" (UID: "cdf7ec9a-0f7c-449b-9376-44e251f47cc8"). InnerVolumeSpecName "kube-api-access-77fp8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 03:30:30.650618 kubelet[3199]: I1216 03:30:30.650558 3199 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdf7ec9a-0f7c-449b-9376-44e251f47cc8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cdf7ec9a-0f7c-449b-9376-44e251f47cc8" (UID: "cdf7ec9a-0f7c-449b-9376-44e251f47cc8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 03:30:30.653141 systemd[1]: var-lib-kubelet-pods-cdf7ec9a\x2d0f7c\x2d449b\x2d9376\x2d44e251f47cc8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 03:30:30.734281 kubelet[3199]: I1216 03:30:30.734207 3199 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cdf7ec9a-0f7c-449b-9376-44e251f47cc8-whisker-backend-key-pair\") on node \"ip-172-31-30-117\" DevicePath \"\"" Dec 16 03:30:30.734281 kubelet[3199]: I1216 03:30:30.734246 3199 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-77fp8\" (UniqueName: \"kubernetes.io/projected/cdf7ec9a-0f7c-449b-9376-44e251f47cc8-kube-api-access-77fp8\") on node \"ip-172-31-30-117\" DevicePath \"\"" Dec 16 03:30:30.734281 kubelet[3199]: I1216 03:30:30.734256 3199 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdf7ec9a-0f7c-449b-9376-44e251f47cc8-whisker-ca-bundle\") on node \"ip-172-31-30-117\" DevicePath \"\"" Dec 16 03:30:30.898837 systemd[1]: Removed slice kubepods-besteffort-podcdf7ec9a_0f7c_449b_9376_44e251f47cc8.slice - libcontainer container kubepods-besteffort-podcdf7ec9a_0f7c_449b_9376_44e251f47cc8.slice. Dec 16 03:30:31.087302 systemd[1]: Created slice kubepods-besteffort-poda7663fa2_6c23_44c9_b4f7_07ab48404e5b.slice - libcontainer container kubepods-besteffort-poda7663fa2_6c23_44c9_b4f7_07ab48404e5b.slice. 
Dec 16 03:30:31.138296 kubelet[3199]: I1216 03:30:31.138264 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzn98\" (UniqueName: \"kubernetes.io/projected/a7663fa2-6c23-44c9-b4f7-07ab48404e5b-kube-api-access-hzn98\") pod \"whisker-79b9fd54f5-5npkd\" (UID: \"a7663fa2-6c23-44c9-b4f7-07ab48404e5b\") " pod="calico-system/whisker-79b9fd54f5-5npkd" Dec 16 03:30:31.140506 kubelet[3199]: I1216 03:30:31.140448 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7663fa2-6c23-44c9-b4f7-07ab48404e5b-whisker-ca-bundle\") pod \"whisker-79b9fd54f5-5npkd\" (UID: \"a7663fa2-6c23-44c9-b4f7-07ab48404e5b\") " pod="calico-system/whisker-79b9fd54f5-5npkd" Dec 16 03:30:31.140506 kubelet[3199]: I1216 03:30:31.140527 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a7663fa2-6c23-44c9-b4f7-07ab48404e5b-whisker-backend-key-pair\") pod \"whisker-79b9fd54f5-5npkd\" (UID: \"a7663fa2-6c23-44c9-b4f7-07ab48404e5b\") " pod="calico-system/whisker-79b9fd54f5-5npkd" Dec 16 03:30:31.380493 containerd[1851]: time="2025-12-16T03:30:31.380308536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b698bdfc8-x42pm,Uid:69c31889-528a-4f2f-822c-6d89b94291a9,Namespace:calico-system,Attempt:0,}" Dec 16 03:30:31.381457 kubelet[3199]: I1216 03:30:31.381417 3199 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdf7ec9a-0f7c-449b-9376-44e251f47cc8" path="/var/lib/kubelet/pods/cdf7ec9a-0f7c-449b-9376-44e251f47cc8/volumes" Dec 16 03:30:31.395853 containerd[1851]: time="2025-12-16T03:30:31.395789103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79b9fd54f5-5npkd,Uid:a7663fa2-6c23-44c9-b4f7-07ab48404e5b,Namespace:calico-system,Attempt:0,}" Dec 16 
03:30:32.380870 containerd[1851]: time="2025-12-16T03:30:32.380622818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bxg4k,Uid:c016e4c7-aae1-4c29-9a44-bb2d4d75b201,Namespace:kube-system,Attempt:0,}" Dec 16 03:30:32.885212 kernel: kauditd_printk_skb: 5 callbacks suppressed Dec 16 03:30:32.887017 kernel: audit: type=1334 audit(1765855832.882:598): prog-id=185 op=LOAD Dec 16 03:30:32.888638 kernel: audit: type=1300 audit(1765855832.882:598): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffddc92fd50 a2=98 a3=1fffffffffffffff items=0 ppid=4700 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:32.882000 audit: BPF prog-id=185 op=LOAD Dec 16 03:30:32.882000 audit[4825]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffddc92fd50 a2=98 a3=1fffffffffffffff items=0 ppid=4700 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:32.882000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:30:32.899194 kernel: audit: type=1327 audit(1765855832.882:598): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:30:32.884000 audit: BPF prog-id=185 op=UNLOAD Dec 16 03:30:32.901204 kernel: audit: type=1334 audit(1765855832.884:599): prog-id=185 op=UNLOAD Dec 16 03:30:32.884000 
audit[4825]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffddc92fd20 a3=0 items=0 ppid=4700 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:32.910213 kernel: audit: type=1300 audit(1765855832.884:599): arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffddc92fd20 a3=0 items=0 ppid=4700 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:32.884000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:30:32.885000 audit: BPF prog-id=186 op=LOAD Dec 16 03:30:32.919696 kernel: audit: type=1327 audit(1765855832.884:599): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:30:32.919743 kernel: audit: type=1334 audit(1765855832.885:600): prog-id=186 op=LOAD Dec 16 03:30:32.920214 kernel: audit: type=1300 audit(1765855832.885:600): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffddc92fc30 a2=94 a3=3 items=0 ppid=4700 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:32.885000 audit[4825]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffddc92fc30 a2=94 a3=3 items=0 ppid=4700 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:32.885000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:30:32.927201 kernel: audit: type=1327 audit(1765855832.885:600): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:30:32.885000 audit: BPF prog-id=186 op=UNLOAD Dec 16 03:30:32.885000 audit[4825]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffddc92fc30 a2=94 a3=3 items=0 ppid=4700 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:32.885000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:30:32.885000 audit: BPF prog-id=187 op=LOAD Dec 16 03:30:32.885000 audit[4825]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffddc92fc70 a2=94 a3=7ffddc92fe50 items=0 ppid=4700 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:32.885000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:30:32.885000 audit: BPF prog-id=187 op=UNLOAD Dec 16 03:30:32.885000 audit[4825]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffddc92fc70 a2=94 a3=7ffddc92fe50 items=0 ppid=4700 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:32.885000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:30:32.897000 audit: BPF prog-id=188 op=LOAD Dec 16 03:30:32.897000 audit[4826]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef7254c80 a2=98 a3=3 items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:32.897000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:32.898000 audit: BPF prog-id=188 op=UNLOAD Dec 16 03:30:32.935380 kernel: audit: type=1334 audit(1765855832.885:601): prog-id=186 op=UNLOAD Dec 16 03:30:32.898000 audit[4826]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffef7254c50 a3=0 items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:32.898000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:32.899000 audit: BPF prog-id=189 op=LOAD Dec 16 03:30:32.899000 audit[4826]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef7254a70 a2=94 a3=54428f items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:32.899000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:32.899000 audit: BPF prog-id=189 op=UNLOAD Dec 16 03:30:32.899000 audit[4826]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffef7254a70 a2=94 a3=54428f items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:32.899000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:32.899000 audit: BPF prog-id=190 op=LOAD Dec 16 03:30:32.899000 audit[4826]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef7254aa0 a2=94 a3=2 items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:32.899000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:32.899000 audit: BPF prog-id=190 op=UNLOAD Dec 16 03:30:32.899000 audit[4826]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffef7254aa0 a2=0 a3=2 items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:32.899000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 
03:30:33.083000 audit: BPF prog-id=191 op=LOAD Dec 16 03:30:33.083000 audit[4826]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef7254960 a2=94 a3=1 items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.083000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:33.083000 audit: BPF prog-id=191 op=UNLOAD Dec 16 03:30:33.083000 audit[4826]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffef7254960 a2=94 a3=1 items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.083000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:33.096000 audit: BPF prog-id=192 op=LOAD Dec 16 03:30:33.096000 audit[4826]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffef7254950 a2=94 a3=4 items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.096000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:33.096000 audit: BPF prog-id=192 op=UNLOAD Dec 16 03:30:33.096000 audit[4826]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffef7254950 a2=0 a3=4 items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.096000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:33.096000 audit: BPF prog-id=193 op=LOAD Dec 16 03:30:33.096000 
audit[4826]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffef72547b0 a2=94 a3=5 items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.096000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:33.096000 audit: BPF prog-id=193 op=UNLOAD Dec 16 03:30:33.096000 audit[4826]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffef72547b0 a2=0 a3=5 items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.096000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:33.096000 audit: BPF prog-id=194 op=LOAD Dec 16 03:30:33.096000 audit[4826]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffef72549d0 a2=94 a3=6 items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.096000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:33.096000 audit: BPF prog-id=194 op=UNLOAD Dec 16 03:30:33.096000 audit[4826]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffef72549d0 a2=0 a3=6 items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.096000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:33.097000 audit: BPF prog-id=195 op=LOAD Dec 16 03:30:33.097000 audit[4826]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 
a1=7ffef7254180 a2=94 a3=88 items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.097000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:33.097000 audit: BPF prog-id=196 op=LOAD Dec 16 03:30:33.097000 audit[4826]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffef7254000 a2=94 a3=2 items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.097000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:33.097000 audit: BPF prog-id=196 op=UNLOAD Dec 16 03:30:33.097000 audit[4826]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffef7254030 a2=0 a3=7ffef7254130 items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.097000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:33.098000 audit: BPF prog-id=195 op=UNLOAD Dec 16 03:30:33.098000 audit[4826]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=ec06d10 a2=0 a3=430e2f55aeb47cdb items=0 ppid=4700 pid=4826 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.098000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:30:33.189000 audit: BPF prog-id=197 op=LOAD Dec 16 03:30:33.189000 audit[4829]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffde6ae24a0 a2=98 a3=1999999999999999 items=0 
ppid=4700 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.189000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:30:33.189000 audit: BPF prog-id=197 op=UNLOAD Dec 16 03:30:33.189000 audit[4829]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffde6ae2470 a3=0 items=0 ppid=4700 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.189000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:30:33.189000 audit: BPF prog-id=198 op=LOAD Dec 16 03:30:33.189000 audit[4829]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffde6ae2380 a2=94 a3=ffff items=0 ppid=4700 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.189000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:30:33.189000 audit: BPF prog-id=198 op=UNLOAD Dec 16 03:30:33.189000 audit[4829]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffde6ae2380 a2=94 a3=ffff items=0 ppid=4700 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.189000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:30:33.189000 audit: BPF prog-id=199 op=LOAD Dec 16 03:30:33.189000 audit[4829]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffde6ae23c0 a2=94 a3=7ffde6ae25a0 items=0 ppid=4700 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.189000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:30:33.189000 audit: BPF prog-id=199 op=UNLOAD Dec 16 03:30:33.189000 audit[4829]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffde6ae23c0 a2=94 a3=7ffde6ae25a0 items=0 ppid=4700 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.189000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F 
Dec 16 03:30:33.321083 systemd-networkd[1463]: vxlan.calico: Link UP Dec 16 03:30:33.321097 systemd-networkd[1463]: vxlan.calico: Gained carrier Dec 16 03:30:33.323933 (udev-worker)[4846]: Network interface NamePolicy= disabled on kernel command line. Dec 16 03:30:33.379195 containerd[1851]: time="2025-12-16T03:30:33.379141715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pl4wm,Uid:39c8b25e-ea78-443a-855e-e43746267826,Namespace:calico-system,Attempt:0,}" Dec 16 03:30:33.670000 audit: BPF prog-id=200 op=LOAD Dec 16 03:30:33.670000 audit[4865]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd375716c0 a2=98 a3=0 items=0 ppid=4700 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.670000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:30:33.671000 audit: BPF prog-id=200 op=UNLOAD Dec 16 03:30:33.671000 audit[4865]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd37571690 a3=0 items=0 ppid=4700 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.671000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:30:33.673113 (udev-worker)[4841]: Network interface NamePolicy= disabled on kernel command line. Dec 16 03:30:33.675527 (udev-worker)[4862]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 03:30:33.700000 audit: BPF prog-id=201 op=LOAD Dec 16 03:30:33.700000 audit[4865]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd375714d0 a2=94 a3=54428f items=0 ppid=4700 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.700000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:30:33.700000 audit: BPF prog-id=201 op=UNLOAD Dec 16 03:30:33.700000 audit[4865]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd375714d0 a2=94 a3=54428f items=0 ppid=4700 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.700000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:30:33.700000 audit: BPF prog-id=202 op=LOAD Dec 16 03:30:33.700000 audit[4865]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd37571500 a2=94 a3=2 items=0 ppid=4700 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.700000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:30:33.700000 audit: BPF prog-id=202 op=UNLOAD Dec 
16 03:30:33.700000 audit[4865]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd37571500 a2=0 a3=2 items=0 ppid=4700 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.700000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:30:33.700000 audit: BPF prog-id=203 op=LOAD Dec 16 03:30:33.700000 audit[4865]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd375712b0 a2=94 a3=4 items=0 ppid=4700 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.700000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:30:33.700000 audit: BPF prog-id=203 op=UNLOAD Dec 16 03:30:33.700000 audit[4865]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd375712b0 a2=94 a3=4 items=0 ppid=4700 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.700000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:30:33.700000 audit: BPF prog-id=204 op=LOAD Dec 16 03:30:33.700000 audit[4865]: SYSCALL arch=c000003e syscall=321 
success=yes exit=6 a0=5 a1=7ffd375713b0 a2=94 a3=7ffd37571530 items=0 ppid=4700 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.700000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:30:33.700000 audit: BPF prog-id=204 op=UNLOAD Dec 16 03:30:33.700000 audit[4865]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd375713b0 a2=0 a3=7ffd37571530 items=0 ppid=4700 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.700000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:30:33.701000 audit: BPF prog-id=205 op=LOAD Dec 16 03:30:33.701000 audit[4865]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd37570ae0 a2=94 a3=2 items=0 ppid=4700 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.701000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:30:33.701000 audit: BPF prog-id=205 op=UNLOAD Dec 16 03:30:33.701000 audit[4865]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd37570ae0 a2=0 a3=2 
items=0 ppid=4700 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.701000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:30:33.701000 audit: BPF prog-id=206 op=LOAD Dec 16 03:30:33.701000 audit[4865]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd37570be0 a2=94 a3=30 items=0 ppid=4700 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.701000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:30:33.714000 audit: BPF prog-id=207 op=LOAD Dec 16 03:30:33.714000 audit[4871]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff2fafbb70 a2=98 a3=0 items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.714000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.714000 audit: BPF prog-id=207 op=UNLOAD Dec 16 03:30:33.714000 audit[4871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff2fafbb40 a3=0 items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.714000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.715000 audit: BPF prog-id=208 op=LOAD Dec 16 03:30:33.715000 audit[4871]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff2fafb960 a2=94 a3=54428f items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.715000 audit: BPF prog-id=208 op=UNLOAD Dec 16 03:30:33.715000 audit[4871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff2fafb960 a2=94 a3=54428f items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.715000 audit: BPF prog-id=209 op=LOAD Dec 16 03:30:33.715000 audit[4871]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff2fafb990 a2=94 a3=2 items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.715000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.715000 audit: BPF prog-id=209 op=UNLOAD Dec 16 03:30:33.715000 audit[4871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff2fafb990 a2=0 a3=2 items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.948000 audit: BPF prog-id=210 op=LOAD Dec 16 03:30:33.948000 audit[4871]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff2fafb850 a2=94 a3=1 items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.948000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.948000 audit: BPF prog-id=210 op=UNLOAD Dec 16 03:30:33.948000 audit[4871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff2fafb850 a2=94 a3=1 items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.948000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.965000 audit: BPF prog-id=211 op=LOAD Dec 16 03:30:33.965000 audit[4871]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff2fafb840 a2=94 a3=4 items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.965000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.965000 audit: BPF prog-id=211 op=UNLOAD Dec 16 03:30:33.965000 audit[4871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff2fafb840 a2=0 a3=4 items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.965000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.966000 audit: BPF prog-id=212 op=LOAD Dec 16 03:30:33.966000 audit[4871]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff2fafb6a0 a2=94 a3=5 items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.966000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.966000 audit: BPF prog-id=212 op=UNLOAD Dec 16 03:30:33.966000 audit[4871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff2fafb6a0 a2=0 a3=5 items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.966000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.966000 audit: BPF prog-id=213 op=LOAD Dec 16 03:30:33.966000 audit[4871]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff2fafb8c0 a2=94 a3=6 items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.966000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.966000 audit: BPF prog-id=213 op=UNLOAD Dec 16 03:30:33.966000 audit[4871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff2fafb8c0 a2=0 a3=6 items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.966000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.966000 audit: BPF prog-id=214 op=LOAD Dec 16 03:30:33.966000 audit[4871]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff2fafb070 a2=94 a3=88 items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.966000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.967000 audit: BPF prog-id=215 op=LOAD Dec 16 03:30:33.967000 audit[4871]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff2fafaef0 a2=94 a3=2 items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.967000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.967000 audit: BPF prog-id=215 op=UNLOAD Dec 16 03:30:33.967000 audit[4871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff2fafaf20 a2=0 a3=7fff2fafb020 items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.967000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:33.967000 audit: BPF prog-id=214 op=UNLOAD Dec 16 03:30:33.967000 audit[4871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=33b32d10 a2=0 a3=5b6b288dd459f213 items=0 ppid=4700 pid=4871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:33.967000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:30:34.054000 audit: BPF prog-id=206 op=UNLOAD Dec 16 03:30:34.054000 audit[4700]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0003044c0 a2=0 a3=0 items=0 ppid=4689 pid=4700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:34.054000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 03:30:34.210753 (udev-worker)[4869]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 03:30:34.216349 systemd-networkd[1463]: cali3eb4588be1a: Link UP Dec 16 03:30:34.217442 systemd-networkd[1463]: cali3eb4588be1a: Gained carrier Dec 16 03:30:34.242111 containerd[1851]: 2025-12-16 03:30:32.472 [INFO][4772] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:30:34.242111 containerd[1851]: 2025-12-16 03:30:32.495 [INFO][4772] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--117-k8s-coredns--668d6bf9bc--bxg4k-eth0 coredns-668d6bf9bc- kube-system c016e4c7-aae1-4c29-9a44-bb2d4d75b201 827 0 2025-12-16 03:29:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-117 coredns-668d6bf9bc-bxg4k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3eb4588be1a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" Namespace="kube-system" Pod="coredns-668d6bf9bc-bxg4k" WorkloadEndpoint="ip--172--31--30--117-k8s-coredns--668d6bf9bc--bxg4k-" Dec 16 03:30:34.242111 containerd[1851]: 2025-12-16 03:30:32.496 [INFO][4772] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" Namespace="kube-system" Pod="coredns-668d6bf9bc-bxg4k" WorkloadEndpoint="ip--172--31--30--117-k8s-coredns--668d6bf9bc--bxg4k-eth0" Dec 16 03:30:34.242111 containerd[1851]: 2025-12-16 03:30:34.078 [INFO][4786] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" HandleID="k8s-pod-network.8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" Workload="ip--172--31--30--117-k8s-coredns--668d6bf9bc--bxg4k-eth0" Dec 16 03:30:34.242850 containerd[1851]: 
2025-12-16 03:30:34.081 [INFO][4786] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" HandleID="k8s-pod-network.8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" Workload="ip--172--31--30--117-k8s-coredns--668d6bf9bc--bxg4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e380), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-117", "pod":"coredns-668d6bf9bc-bxg4k", "timestamp":"2025-12-16 03:30:34.078888994 +0000 UTC"}, Hostname:"ip-172-31-30-117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:30:34.242850 containerd[1851]: 2025-12-16 03:30:34.081 [INFO][4786] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:30:34.242850 containerd[1851]: 2025-12-16 03:30:34.081 [INFO][4786] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:30:34.242850 containerd[1851]: 2025-12-16 03:30:34.082 [INFO][4786] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-117' Dec 16 03:30:34.242850 containerd[1851]: 2025-12-16 03:30:34.096 [INFO][4786] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" host="ip-172-31-30-117" Dec 16 03:30:34.242850 containerd[1851]: 2025-12-16 03:30:34.166 [INFO][4786] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-117" Dec 16 03:30:34.242850 containerd[1851]: 2025-12-16 03:30:34.174 [INFO][4786] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:34.242850 containerd[1851]: 2025-12-16 03:30:34.176 [INFO][4786] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:34.242850 containerd[1851]: 2025-12-16 03:30:34.179 [INFO][4786] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:34.243117 containerd[1851]: 2025-12-16 03:30:34.179 [INFO][4786] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" host="ip-172-31-30-117" Dec 16 03:30:34.243117 containerd[1851]: 2025-12-16 03:30:34.182 [INFO][4786] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522 Dec 16 03:30:34.243117 containerd[1851]: 2025-12-16 03:30:34.189 [INFO][4786] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" host="ip-172-31-30-117" Dec 16 03:30:34.243117 containerd[1851]: 2025-12-16 03:30:34.197 [INFO][4786] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.193/26] block=192.168.14.192/26 
handle="k8s-pod-network.8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" host="ip-172-31-30-117" Dec 16 03:30:34.243117 containerd[1851]: 2025-12-16 03:30:34.197 [INFO][4786] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.193/26] handle="k8s-pod-network.8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" host="ip-172-31-30-117" Dec 16 03:30:34.243117 containerd[1851]: 2025-12-16 03:30:34.197 [INFO][4786] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:30:34.243117 containerd[1851]: 2025-12-16 03:30:34.197 [INFO][4786] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.193/26] IPv6=[] ContainerID="8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" HandleID="k8s-pod-network.8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" Workload="ip--172--31--30--117-k8s-coredns--668d6bf9bc--bxg4k-eth0" Dec 16 03:30:34.243333 containerd[1851]: 2025-12-16 03:30:34.203 [INFO][4772] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" Namespace="kube-system" Pod="coredns-668d6bf9bc-bxg4k" WorkloadEndpoint="ip--172--31--30--117-k8s-coredns--668d6bf9bc--bxg4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--117-k8s-coredns--668d6bf9bc--bxg4k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c016e4c7-aae1-4c29-9a44-bb2d4d75b201", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 29, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-117", ContainerID:"", Pod:"coredns-668d6bf9bc-bxg4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3eb4588be1a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:30:34.243333 containerd[1851]: 2025-12-16 03:30:34.203 [INFO][4772] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.193/32] ContainerID="8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" Namespace="kube-system" Pod="coredns-668d6bf9bc-bxg4k" WorkloadEndpoint="ip--172--31--30--117-k8s-coredns--668d6bf9bc--bxg4k-eth0" Dec 16 03:30:34.243333 containerd[1851]: 2025-12-16 03:30:34.203 [INFO][4772] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3eb4588be1a ContainerID="8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" Namespace="kube-system" Pod="coredns-668d6bf9bc-bxg4k" WorkloadEndpoint="ip--172--31--30--117-k8s-coredns--668d6bf9bc--bxg4k-eth0" Dec 16 03:30:34.243333 containerd[1851]: 2025-12-16 03:30:34.218 [INFO][4772] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-bxg4k" WorkloadEndpoint="ip--172--31--30--117-k8s-coredns--668d6bf9bc--bxg4k-eth0" Dec 16 03:30:34.243333 containerd[1851]: 2025-12-16 03:30:34.219 [INFO][4772] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" Namespace="kube-system" Pod="coredns-668d6bf9bc-bxg4k" WorkloadEndpoint="ip--172--31--30--117-k8s-coredns--668d6bf9bc--bxg4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--117-k8s-coredns--668d6bf9bc--bxg4k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c016e4c7-aae1-4c29-9a44-bb2d4d75b201", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 29, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-117", ContainerID:"8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522", Pod:"coredns-668d6bf9bc-bxg4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3eb4588be1a", MAC:"96:da:c1:ff:5a:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:30:34.243333 containerd[1851]: 2025-12-16 03:30:34.236 [INFO][4772] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" Namespace="kube-system" Pod="coredns-668d6bf9bc-bxg4k" WorkloadEndpoint="ip--172--31--30--117-k8s-coredns--668d6bf9bc--bxg4k-eth0" Dec 16 03:30:34.286000 audit[4901]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=4901 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:30:34.286000 audit[4901]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffebbf6e510 a2=0 a3=7ffebbf6e4fc items=0 ppid=4700 pid=4901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:34.286000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:30:34.301000 audit[4902]: NETFILTER_CFG table=raw:122 family=2 entries=21 op=nft_register_chain pid=4902 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:30:34.301000 audit[4902]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc95f8ae10 a2=0 a3=7ffc95f8adfc items=0 ppid=4700 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:30:34.301000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:30:34.314631 systemd-networkd[1463]: caliae94256cd47: Link UP Dec 16 03:30:34.315051 systemd-networkd[1463]: caliae94256cd47: Gained carrier Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:31.434 [INFO][4650] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:31.452 [INFO][4650] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--117-k8s-calico--kube--controllers--6b698bdfc8--x42pm-eth0 calico-kube-controllers-6b698bdfc8- calico-system 69c31889-528a-4f2f-822c-6d89b94291a9 824 0 2025-12-16 03:29:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b698bdfc8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-117 calico-kube-controllers-6b698bdfc8-x42pm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliae94256cd47 [] [] }} ContainerID="242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" Namespace="calico-system" Pod="calico-kube-controllers-6b698bdfc8-x42pm" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--kube--controllers--6b698bdfc8--x42pm-" Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:31.452 [INFO][4650] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" Namespace="calico-system" Pod="calico-kube-controllers-6b698bdfc8-x42pm" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--kube--controllers--6b698bdfc8--x42pm-eth0" Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.078 
[INFO][4673] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" HandleID="k8s-pod-network.242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" Workload="ip--172--31--30--117-k8s-calico--kube--controllers--6b698bdfc8--x42pm-eth0" Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.081 [INFO][4673] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" HandleID="k8s-pod-network.242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" Workload="ip--172--31--30--117-k8s-calico--kube--controllers--6b698bdfc8--x42pm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324570), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-117", "pod":"calico-kube-controllers-6b698bdfc8-x42pm", "timestamp":"2025-12-16 03:30:34.078136166 +0000 UTC"}, Hostname:"ip-172-31-30-117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.081 [INFO][4673] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.197 [INFO][4673] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
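An aside on the `audit: PROCTITLE` records interleaved above: the kernel audit subsystem hex-encodes the process title whenever it contains NUL bytes (the argv separators), which is why the `iptables-nft-restore` invocations appear as long hex strings. The value from the record above decodes as follows; this snippet is not part of the log, just a sketch of how to read such fields:

```python
# Decode a hex-encoded PROCTITLE field from a kernel audit record.
# The title is hex-encoded because it contains NUL bytes, which
# separate the individual argv entries of the audited process.
PROCTITLE = (
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130002D2D776169742D696E"
    "74657276616C003530303030"
)

argv = [a.decode() for a in bytes.fromhex(PROCTITLE).split(b"\x00")]
print(argv)
# → ['iptables-nft-restore', '--noflush', '--verbose',
#    '--wait', '10', '--wait-interval', '50000']
```

So each of these audit records corresponds to Calico's felix/CNI plugin invoking `iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000` (via `/usr/sbin/xtables-nft-multi`, per the `exe=` field).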
Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.198 [INFO][4673] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-117' Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.214 [INFO][4673] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" host="ip-172-31-30-117" Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.267 [INFO][4673] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-117" Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.272 [INFO][4673] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.275 [INFO][4673] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.277 [INFO][4673] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.277 [INFO][4673] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" host="ip-172-31-30-117" Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.281 [INFO][4673] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8 Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.290 [INFO][4673] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" host="ip-172-31-30-117" Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.302 [INFO][4673] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.194/26] block=192.168.14.192/26 
handle="k8s-pod-network.242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" host="ip-172-31-30-117" Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.302 [INFO][4673] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.194/26] handle="k8s-pod-network.242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" host="ip-172-31-30-117" Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.302 [INFO][4673] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:30:34.363678 containerd[1851]: 2025-12-16 03:30:34.302 [INFO][4673] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.194/26] IPv6=[] ContainerID="242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" HandleID="k8s-pod-network.242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" Workload="ip--172--31--30--117-k8s-calico--kube--controllers--6b698bdfc8--x42pm-eth0" Dec 16 03:30:34.366044 containerd[1851]: 2025-12-16 03:30:34.309 [INFO][4650] cni-plugin/k8s.go 418: Populated endpoint ContainerID="242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" Namespace="calico-system" Pod="calico-kube-controllers-6b698bdfc8-x42pm" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--kube--controllers--6b698bdfc8--x42pm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--117-k8s-calico--kube--controllers--6b698bdfc8--x42pm-eth0", GenerateName:"calico-kube-controllers-6b698bdfc8-", Namespace:"calico-system", SelfLink:"", UID:"69c31889-528a-4f2f-822c-6d89b94291a9", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b698bdfc8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-117", ContainerID:"", Pod:"calico-kube-controllers-6b698bdfc8-x42pm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae94256cd47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:30:34.366044 containerd[1851]: 2025-12-16 03:30:34.310 [INFO][4650] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.194/32] ContainerID="242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" Namespace="calico-system" Pod="calico-kube-controllers-6b698bdfc8-x42pm" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--kube--controllers--6b698bdfc8--x42pm-eth0" Dec 16 03:30:34.366044 containerd[1851]: 2025-12-16 03:30:34.310 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae94256cd47 ContainerID="242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" Namespace="calico-system" Pod="calico-kube-controllers-6b698bdfc8-x42pm" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--kube--controllers--6b698bdfc8--x42pm-eth0" Dec 16 03:30:34.366044 containerd[1851]: 2025-12-16 03:30:34.314 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" Namespace="calico-system" Pod="calico-kube-controllers-6b698bdfc8-x42pm" 
WorkloadEndpoint="ip--172--31--30--117-k8s-calico--kube--controllers--6b698bdfc8--x42pm-eth0" Dec 16 03:30:34.366044 containerd[1851]: 2025-12-16 03:30:34.315 [INFO][4650] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" Namespace="calico-system" Pod="calico-kube-controllers-6b698bdfc8-x42pm" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--kube--controllers--6b698bdfc8--x42pm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--117-k8s-calico--kube--controllers--6b698bdfc8--x42pm-eth0", GenerateName:"calico-kube-controllers-6b698bdfc8-", Namespace:"calico-system", SelfLink:"", UID:"69c31889-528a-4f2f-822c-6d89b94291a9", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b698bdfc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-117", ContainerID:"242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8", Pod:"calico-kube-controllers-6b698bdfc8-x42pm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae94256cd47", 
MAC:"c2:a4:bb:13:e5:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:30:34.366044 containerd[1851]: 2025-12-16 03:30:34.350 [INFO][4650] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" Namespace="calico-system" Pod="calico-kube-controllers-6b698bdfc8-x42pm" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--kube--controllers--6b698bdfc8--x42pm-eth0" Dec 16 03:30:34.371000 audit[4912]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4912 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:30:34.371000 audit[4912]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe5b8fa8b0 a2=0 a3=7ffe5b8fa89c items=0 ppid=4700 pid=4912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:34.371000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:30:34.375000 audit[4911]: NETFILTER_CFG table=filter:124 family=2 entries=39 op=nft_register_chain pid=4911 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:30:34.375000 audit[4911]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffe94df5600 a2=0 a3=7ffe94df55ec items=0 ppid=4700 pid=4911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:34.375000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 
03:30:34.521252 systemd-networkd[1463]: cali18bb85df3b5: Link UP Dec 16 03:30:34.538711 systemd-networkd[1463]: cali18bb85df3b5: Gained carrier Dec 16 03:30:34.551000 audit[4924]: NETFILTER_CFG table=filter:125 family=2 entries=72 op=nft_register_chain pid=4924 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:30:34.551000 audit[4924]: SYSCALL arch=c000003e syscall=46 success=yes exit=41016 a0=3 a1=7ffc9d2bbf40 a2=0 a3=7ffc9d2bbf2c items=0 ppid=4700 pid=4924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:34.551000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:30.438 [INFO][4615] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:30.843 [INFO][4615] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--117-k8s-goldmane--666569f655--95jjv-eth0 goldmane-666569f655- calico-system 1598e409-2173-4fc3-8415-b507d5511623 826 0 2025-12-16 03:29:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-30-117 goldmane-666569f655-95jjv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali18bb85df3b5 [] [] }} ContainerID="794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" Namespace="calico-system" Pod="goldmane-666569f655-95jjv" WorkloadEndpoint="ip--172--31--30--117-k8s-goldmane--666569f655--95jjv-" Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 
03:30:30.843 [INFO][4615] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" Namespace="calico-system" Pod="goldmane-666569f655-95jjv" WorkloadEndpoint="ip--172--31--30--117-k8s-goldmane--666569f655--95jjv-eth0" Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.079 [INFO][4635] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" HandleID="k8s-pod-network.794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" Workload="ip--172--31--30--117-k8s-goldmane--666569f655--95jjv-eth0" Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.082 [INFO][4635] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" HandleID="k8s-pod-network.794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" Workload="ip--172--31--30--117-k8s-goldmane--666569f655--95jjv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103570), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-117", "pod":"goldmane-666569f655-95jjv", "timestamp":"2025-12-16 03:30:34.079514976 +0000 UTC"}, Hostname:"ip-172-31-30-117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.082 [INFO][4635] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.303 [INFO][4635] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.303 [INFO][4635] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-117' Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.317 [INFO][4635] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" host="ip-172-31-30-117" Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.371 [INFO][4635] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-117" Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.403 [INFO][4635] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.406 [INFO][4635] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.413 [INFO][4635] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.413 [INFO][4635] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" host="ip-172-31-30-117" Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.428 [INFO][4635] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.437 [INFO][4635] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" host="ip-172-31-30-117" Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.475 [INFO][4635] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.195/26] block=192.168.14.192/26 
handle="k8s-pod-network.794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" host="ip-172-31-30-117" Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.475 [INFO][4635] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.195/26] handle="k8s-pod-network.794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" host="ip-172-31-30-117" Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.475 [INFO][4635] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:30:34.648417 containerd[1851]: 2025-12-16 03:30:34.475 [INFO][4635] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.195/26] IPv6=[] ContainerID="794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" HandleID="k8s-pod-network.794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" Workload="ip--172--31--30--117-k8s-goldmane--666569f655--95jjv-eth0" Dec 16 03:30:34.651918 containerd[1851]: 2025-12-16 03:30:34.509 [INFO][4615] cni-plugin/k8s.go 418: Populated endpoint ContainerID="794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" Namespace="calico-system" Pod="goldmane-666569f655-95jjv" WorkloadEndpoint="ip--172--31--30--117-k8s-goldmane--666569f655--95jjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--117-k8s-goldmane--666569f655--95jjv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1598e409-2173-4fc3-8415-b507d5511623", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 29, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-117", ContainerID:"", Pod:"goldmane-666569f655-95jjv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali18bb85df3b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:30:34.651918 containerd[1851]: 2025-12-16 03:30:34.510 [INFO][4615] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.195/32] ContainerID="794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" Namespace="calico-system" Pod="goldmane-666569f655-95jjv" WorkloadEndpoint="ip--172--31--30--117-k8s-goldmane--666569f655--95jjv-eth0" Dec 16 03:30:34.651918 containerd[1851]: 2025-12-16 03:30:34.510 [INFO][4615] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18bb85df3b5 ContainerID="794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" Namespace="calico-system" Pod="goldmane-666569f655-95jjv" WorkloadEndpoint="ip--172--31--30--117-k8s-goldmane--666569f655--95jjv-eth0" Dec 16 03:30:34.651918 containerd[1851]: 2025-12-16 03:30:34.536 [INFO][4615] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" Namespace="calico-system" Pod="goldmane-666569f655-95jjv" WorkloadEndpoint="ip--172--31--30--117-k8s-goldmane--666569f655--95jjv-eth0" Dec 16 03:30:34.651918 containerd[1851]: 2025-12-16 03:30:34.540 [INFO][4615] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" Namespace="calico-system" Pod="goldmane-666569f655-95jjv" WorkloadEndpoint="ip--172--31--30--117-k8s-goldmane--666569f655--95jjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--117-k8s-goldmane--666569f655--95jjv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1598e409-2173-4fc3-8415-b507d5511623", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 29, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-117", ContainerID:"794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a", Pod:"goldmane-666569f655-95jjv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.14.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali18bb85df3b5", MAC:"2e:83:40:17:85:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:30:34.651918 containerd[1851]: 2025-12-16 03:30:34.605 [INFO][4615] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" Namespace="calico-system" Pod="goldmane-666569f655-95jjv" 
WorkloadEndpoint="ip--172--31--30--117-k8s-goldmane--666569f655--95jjv-eth0" Dec 16 03:30:34.688444 systemd-networkd[1463]: vxlan.calico: Gained IPv6LL Dec 16 03:30:34.717000 audit[4945]: NETFILTER_CFG table=filter:126 family=2 entries=48 op=nft_register_chain pid=4945 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:30:34.717000 audit[4945]: SYSCALL arch=c000003e syscall=46 success=yes exit=26368 a0=3 a1=7fff2424c350 a2=0 a3=7fff2424c33c items=0 ppid=4700 pid=4945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:34.717000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:30:34.757994 containerd[1851]: time="2025-12-16T03:30:34.757573717Z" level=info msg="connecting to shim 8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522" address="unix:///run/containerd/s/0fb05b8b520b4799903ab17fc600f653722afcd15afc197df4a6b9a0aa3b951b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:30:34.765016 containerd[1851]: time="2025-12-16T03:30:34.764971073Z" level=info msg="connecting to shim 242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8" address="unix:///run/containerd/s/87b5f9681002aa03687aa32f20f64e2e04b63ebd3dd4eed6c407b222efd0cc13" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:30:34.800731 systemd-networkd[1463]: cali8972579303f: Link UP Dec 16 03:30:34.812070 systemd-networkd[1463]: cali8972579303f: Gained carrier Dec 16 03:30:34.816877 containerd[1851]: time="2025-12-16T03:30:34.816826023Z" level=info msg="connecting to shim 794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a" address="unix:///run/containerd/s/721e43a0e4e9c071e99263dc64eeee4e76b66b246ab752b8f0ea0fa6d7befd44" namespace=k8s.io protocol=ttrpc 
version=3 Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:31.446 [INFO][4659] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:31.464 [INFO][4659] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--117-k8s-whisker--79b9fd54f5--5npkd-eth0 whisker-79b9fd54f5- calico-system a7663fa2-6c23-44c9-b4f7-07ab48404e5b 902 0 2025-12-16 03:30:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79b9fd54f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-30-117 whisker-79b9fd54f5-5npkd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8972579303f [] [] }} ContainerID="7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" Namespace="calico-system" Pod="whisker-79b9fd54f5-5npkd" WorkloadEndpoint="ip--172--31--30--117-k8s-whisker--79b9fd54f5--5npkd-" Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:31.465 [INFO][4659] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" Namespace="calico-system" Pod="whisker-79b9fd54f5-5npkd" WorkloadEndpoint="ip--172--31--30--117-k8s-whisker--79b9fd54f5--5npkd-eth0" Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.079 [INFO][4677] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" HandleID="k8s-pod-network.7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" Workload="ip--172--31--30--117-k8s-whisker--79b9fd54f5--5npkd-eth0" Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.080 [INFO][4677] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" 
HandleID="k8s-pod-network.7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" Workload="ip--172--31--30--117-k8s-whisker--79b9fd54f5--5npkd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e470), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-117", "pod":"whisker-79b9fd54f5-5npkd", "timestamp":"2025-12-16 03:30:34.079078648 +0000 UTC"}, Hostname:"ip-172-31-30-117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.081 [INFO][4677] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.478 [INFO][4677] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.478 [INFO][4677] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-117' Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.600 [INFO][4677] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" host="ip-172-31-30-117" Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.634 [INFO][4677] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-117" Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.673 [INFO][4677] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.696 [INFO][4677] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.705 [INFO][4677] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-30-117" 
Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.705 [INFO][4677] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" host="ip-172-31-30-117" Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.707 [INFO][4677] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775 Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.716 [INFO][4677] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" host="ip-172-31-30-117" Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.740 [INFO][4677] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.196/26] block=192.168.14.192/26 handle="k8s-pod-network.7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" host="ip-172-31-30-117" Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.741 [INFO][4677] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.196/26] handle="k8s-pod-network.7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" host="ip-172-31-30-117" Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.741 [INFO][4677] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
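The three IPAM assignments logged so far (.194 for calico-kube-controllers, .195 for goldmane, .196 for whisker) all come out of the same host-affine block `192.168.14.192/26`, serialized by the host-wide IPAM lock. As a quick sanity check, not part of the log, the allocations can be confirmed to fall inside that block:

```python
import ipaddress

# The block and addresses below are taken verbatim from the ipam log
# records above; /26 gives this node 64 addresses (.192 through .255).
block = ipaddress.ip_network("192.168.14.192/26")
assigned = ["192.168.14.194", "192.168.14.195", "192.168.14.196"]

for ip in assigned:
    assert ipaddress.ip_address(ip) in block  # every claim is in-block

print(block.num_addresses)
# → 64
```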
Dec 16 03:30:34.855003 containerd[1851]: 2025-12-16 03:30:34.741 [INFO][4677] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.196/26] IPv6=[] ContainerID="7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" HandleID="k8s-pod-network.7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" Workload="ip--172--31--30--117-k8s-whisker--79b9fd54f5--5npkd-eth0" Dec 16 03:30:34.856143 containerd[1851]: 2025-12-16 03:30:34.757 [INFO][4659] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" Namespace="calico-system" Pod="whisker-79b9fd54f5-5npkd" WorkloadEndpoint="ip--172--31--30--117-k8s-whisker--79b9fd54f5--5npkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--117-k8s-whisker--79b9fd54f5--5npkd-eth0", GenerateName:"whisker-79b9fd54f5-", Namespace:"calico-system", SelfLink:"", UID:"a7663fa2-6c23-44c9-b4f7-07ab48404e5b", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79b9fd54f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-117", ContainerID:"", Pod:"whisker-79b9fd54f5-5npkd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.14.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"cali8972579303f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:30:34.856143 containerd[1851]: 2025-12-16 03:30:34.760 [INFO][4659] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.196/32] ContainerID="7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" Namespace="calico-system" Pod="whisker-79b9fd54f5-5npkd" WorkloadEndpoint="ip--172--31--30--117-k8s-whisker--79b9fd54f5--5npkd-eth0" Dec 16 03:30:34.856143 containerd[1851]: 2025-12-16 03:30:34.760 [INFO][4659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8972579303f ContainerID="7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" Namespace="calico-system" Pod="whisker-79b9fd54f5-5npkd" WorkloadEndpoint="ip--172--31--30--117-k8s-whisker--79b9fd54f5--5npkd-eth0" Dec 16 03:30:34.856143 containerd[1851]: 2025-12-16 03:30:34.813 [INFO][4659] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" Namespace="calico-system" Pod="whisker-79b9fd54f5-5npkd" WorkloadEndpoint="ip--172--31--30--117-k8s-whisker--79b9fd54f5--5npkd-eth0" Dec 16 03:30:34.856143 containerd[1851]: 2025-12-16 03:30:34.815 [INFO][4659] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" Namespace="calico-system" Pod="whisker-79b9fd54f5-5npkd" WorkloadEndpoint="ip--172--31--30--117-k8s-whisker--79b9fd54f5--5npkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--117-k8s-whisker--79b9fd54f5--5npkd-eth0", GenerateName:"whisker-79b9fd54f5-", Namespace:"calico-system", SelfLink:"", UID:"a7663fa2-6c23-44c9-b4f7-07ab48404e5b", ResourceVersion:"902", Generation:0, 
CreationTimestamp:time.Date(2025, time.December, 16, 3, 30, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79b9fd54f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-117", ContainerID:"7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775", Pod:"whisker-79b9fd54f5-5npkd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.14.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8972579303f", MAC:"66:20:08:b9:97:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:30:34.856143 containerd[1851]: 2025-12-16 03:30:34.837 [INFO][4659] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" Namespace="calico-system" Pod="whisker-79b9fd54f5-5npkd" WorkloadEndpoint="ip--172--31--30--117-k8s-whisker--79b9fd54f5--5npkd-eth0" Dec 16 03:30:34.868539 systemd[1]: Started cri-containerd-8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522.scope - libcontainer container 8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522. 
Dec 16 03:30:34.905000 audit[5037]: NETFILTER_CFG table=filter:127 family=2 entries=65 op=nft_register_chain pid=5037 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:30:34.905000 audit[5037]: SYSCALL arch=c000003e syscall=46 success=yes exit=36448 a0=3 a1=7fffb2aad2b0 a2=0 a3=7fffb2aad29c items=0 ppid=4700 pid=5037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:34.905000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:30:34.915000 audit: BPF prog-id=216 op=LOAD Dec 16 03:30:34.917000 audit: BPF prog-id=217 op=LOAD Dec 16 03:30:34.917000 audit[4983]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4956 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:34.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866653066303939643332646561373434316635626537303132623337 Dec 16 03:30:34.920000 audit: BPF prog-id=217 op=UNLOAD Dec 16 03:30:34.920000 audit[4983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4956 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:34.920000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866653066303939643332646561373434316635626537303132623337 Dec 16 03:30:34.922000 audit: BPF prog-id=218 op=LOAD Dec 16 03:30:34.922000 audit[4983]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4956 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:34.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866653066303939643332646561373434316635626537303132623337 Dec 16 03:30:34.928000 audit: BPF prog-id=219 op=LOAD Dec 16 03:30:34.928000 audit[4983]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4956 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:34.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866653066303939643332646561373434316635626537303132623337 Dec 16 03:30:34.928000 audit: BPF prog-id=219 op=UNLOAD Dec 16 03:30:34.928000 audit[4983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4956 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:30:34.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866653066303939643332646561373434316635626537303132623337 Dec 16 03:30:34.933000 audit: BPF prog-id=218 op=UNLOAD Dec 16 03:30:34.933000 audit[4983]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4956 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:34.933000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866653066303939643332646561373434316635626537303132623337 Dec 16 03:30:34.937000 audit: BPF prog-id=220 op=LOAD Dec 16 03:30:34.937000 audit[4983]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4956 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:34.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866653066303939643332646561373434316635626537303132623337 Dec 16 03:30:34.964738 systemd[1]: Started cri-containerd-794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a.scope - libcontainer container 794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a. 
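The audit `PROCTITLE` records above carry each process's full command line as hex-encoded bytes with NUL separators between arguments. A small standalone decoder (a hypothetical helper, not part of auditd or ausearch, which can also do this translation) recovers argv from the `iptables-nft-restore` entry earlier in this log:

```python
def decode_proctitle(hex_field: str) -> list[str]:
    """Decode an audit PROCTITLE hex field into an argv list.

    The kernel logs the command line as raw bytes, with NUL (0x00)
    separating arguments, then hex-encodes the whole buffer.
    """
    raw = bytes.fromhex(hex_field)
    return raw.rstrip(b"\x00").decode("utf-8", errors="replace").split("\x00")

# The iptables-nft-restore PROCTITLE value from the log above.
hex_field = (
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130002D2D776169742D696E"
    "74657276616C003530303030"
)
print(decode_proctitle(hex_field))
# ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10',
#  '--wait-interval', '50000']
```

The truncated `runc` PROCTITLE values decode the same way: `runc --root /run/containerd/runc/k8s.io --log …` followed by the per-task log path.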
Dec 16 03:30:35.010741 containerd[1851]: time="2025-12-16T03:30:35.009242038Z" level=info msg="connecting to shim 7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775" address="unix:///run/containerd/s/7c03de355a5a3f1993b14986c18d179ce9f0dbe59b59b3e584d8b6ceaf1fe5f8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:30:35.028830 systemd[1]: Started cri-containerd-242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8.scope - libcontainer container 242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8. Dec 16 03:30:35.092000 audit: BPF prog-id=221 op=LOAD Dec 16 03:30:35.097000 audit: BPF prog-id=222 op=LOAD Dec 16 03:30:35.097000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4984 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343734396130363534333533623431313331626261393931333630 Dec 16 03:30:35.098000 audit: BPF prog-id=222 op=UNLOAD Dec 16 03:30:35.098000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4984 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343734396130363534333533623431313331626261393931333630 Dec 16 03:30:35.099000 audit: BPF prog-id=223 
op=LOAD Dec 16 03:30:35.099000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4984 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343734396130363534333533623431313331626261393931333630 Dec 16 03:30:35.100000 audit: BPF prog-id=224 op=LOAD Dec 16 03:30:35.100000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4984 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343734396130363534333533623431313331626261393931333630 Dec 16 03:30:35.101000 audit: BPF prog-id=224 op=UNLOAD Dec 16 03:30:35.101000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4984 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343734396130363534333533623431313331626261393931333630 Dec 
16 03:30:35.101000 audit: BPF prog-id=223 op=UNLOAD Dec 16 03:30:35.101000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4984 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343734396130363534333533623431313331626261393931333630 Dec 16 03:30:35.102000 audit: BPF prog-id=225 op=LOAD Dec 16 03:30:35.102000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4984 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739343734396130363534333533623431313331626261393931333630 Dec 16 03:30:35.148137 systemd[1]: Started cri-containerd-7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775.scope - libcontainer container 7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775. 
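The `SYSCALL` audit records in this section identify calls by number under `arch=c000003e` (x86_64). A hand-compiled lookup covering just the numbers that appear here (a convenience sketch; the authoritative mapping is the kernel's x86_64 syscall table) makes the records easier to read:

```python
# Subset of the x86_64 syscall table for the numbers seen in this log.
X86_64_SYSCALLS = {
    3: "close",     # audit: releasing BPF program file descriptors (op=UNLOAD)
    46: "sendmsg",  # audit: netlink message carrying the nft_register_chain batch
    321: "bpf",     # audit: loading BPF programs via runc (op=LOAD)
}

for num in (321, 3, 46):
    print(f"syscall={num} -> {X86_64_SYSCALLS[num]}")
```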
Dec 16 03:30:35.155678 containerd[1851]: time="2025-12-16T03:30:35.155581338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bxg4k,Uid:c016e4c7-aae1-4c29-9a44-bb2d4d75b201,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522\"" Dec 16 03:30:35.160000 audit: BPF prog-id=226 op=LOAD Dec 16 03:30:35.161000 audit: BPF prog-id=227 op=LOAD Dec 16 03:30:35.161000 audit[5007]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00015a238 a2=98 a3=0 items=0 ppid=4959 pid=5007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234326465396562336663623664643735363339313734396632373436 Dec 16 03:30:35.161000 audit: BPF prog-id=227 op=UNLOAD Dec 16 03:30:35.161000 audit[5007]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4959 pid=5007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234326465396562336663623664643735363339313734396632373436 Dec 16 03:30:35.162000 audit: BPF prog-id=228 op=LOAD Dec 16 03:30:35.162000 audit[5007]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00015a488 a2=98 a3=0 items=0 ppid=4959 pid=5007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234326465396562336663623664643735363339313734396632373436 Dec 16 03:30:35.162000 audit: BPF prog-id=229 op=LOAD Dec 16 03:30:35.162000 audit[5007]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00015a218 a2=98 a3=0 items=0 ppid=4959 pid=5007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234326465396562336663623664643735363339313734396632373436 Dec 16 03:30:35.163000 audit: BPF prog-id=229 op=UNLOAD Dec 16 03:30:35.163000 audit[5007]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4959 pid=5007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234326465396562336663623664643735363339313734396632373436 Dec 16 03:30:35.164000 audit: BPF prog-id=228 op=UNLOAD Dec 16 03:30:35.164000 audit[5007]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4959 pid=5007 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234326465396562336663623664643735363339313734396632373436 Dec 16 03:30:35.166000 audit: BPF prog-id=230 op=LOAD Dec 16 03:30:35.166000 audit[5007]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00015a6e8 a2=98 a3=0 items=0 ppid=4959 pid=5007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234326465396562336663623664643735363339313734396632373436 Dec 16 03:30:35.215483 containerd[1851]: time="2025-12-16T03:30:35.215439130Z" level=info msg="CreateContainer within sandbox \"8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:30:35.221000 audit: BPF prog-id=231 op=LOAD Dec 16 03:30:35.226000 audit: BPF prog-id=232 op=LOAD Dec 16 03:30:35.226000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5072 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.226000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762326261393161313730636230663865356561366562633239383336 Dec 16 03:30:35.228000 audit: BPF prog-id=232 op=UNLOAD Dec 16 03:30:35.228000 audit[5098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5072 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762326261393161313730636230663865356561366562633239383336 Dec 16 03:30:35.228000 audit: BPF prog-id=233 op=LOAD Dec 16 03:30:35.228000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5072 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762326261393161313730636230663865356561366562633239383336 Dec 16 03:30:35.228000 audit: BPF prog-id=234 op=LOAD Dec 16 03:30:35.228000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5072 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:30:35.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762326261393161313730636230663865356561366562633239383336 Dec 16 03:30:35.228000 audit: BPF prog-id=234 op=UNLOAD Dec 16 03:30:35.228000 audit[5098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5072 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762326261393161313730636230663865356561366562633239383336 Dec 16 03:30:35.228000 audit: BPF prog-id=233 op=UNLOAD Dec 16 03:30:35.228000 audit[5098]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5072 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762326261393161313730636230663865356561366562633239383336 Dec 16 03:30:35.228000 audit: BPF prog-id=235 op=LOAD Dec 16 03:30:35.228000 audit[5098]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5072 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.228000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762326261393161313730636230663865356561366562633239383336 Dec 16 03:30:35.265068 systemd-networkd[1463]: cali3eb4588be1a: Gained IPv6LL Dec 16 03:30:35.329277 containerd[1851]: time="2025-12-16T03:30:35.329134166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-95jjv,Uid:1598e409-2173-4fc3-8415-b507d5511623,Namespace:calico-system,Attempt:0,} returns sandbox id \"794749a0654353b41131bba991360fd8d5c6e1c7409001922edb3d6b6004243a\"" Dec 16 03:30:35.365530 containerd[1851]: time="2025-12-16T03:30:35.363073662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b698bdfc8-x42pm,Uid:69c31889-528a-4f2f-822c-6d89b94291a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"242de9eb3fcb6dd756391749f274640b49d70dedf98f01880a1b66650da195e8\"" Dec 16 03:30:35.420418 containerd[1851]: time="2025-12-16T03:30:35.420353536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79b9fd54f5-5npkd,Uid:a7663fa2-6c23-44c9-b4f7-07ab48404e5b,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b2ba91a170cb0f8e5ea6ebc29836b491a385ef95379bc3bc5c9f9c6bcc87775\"" Dec 16 03:30:35.428559 systemd-networkd[1463]: cali2ba8603d83b: Link UP Dec 16 03:30:35.430979 systemd-networkd[1463]: cali2ba8603d83b: Gained carrier Dec 16 03:30:35.454204 containerd[1851]: time="2025-12-16T03:30:35.452858585Z" level=info msg="Container 383d98b3d5d2ee906fdbd96dc1fb5d511b98fbb235f3c7ec91a1c64bf0c88ba2: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:30:35.457840 systemd-networkd[1463]: caliae94256cd47: Gained IPv6LL Dec 16 03:30:35.486993 containerd[1851]: time="2025-12-16T03:30:35.486948186Z" level=info msg="CreateContainer within 
sandbox \"8fe0f099d32dea7441f5be7012b37f0e5d51014a43bbd2e1f48f6a70af332522\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"383d98b3d5d2ee906fdbd96dc1fb5d511b98fbb235f3c7ec91a1c64bf0c88ba2\"" Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.099 [INFO][5043] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--117-k8s-csi--node--driver--pl4wm-eth0 csi-node-driver- calico-system 39c8b25e-ea78-443a-855e-e43746267826 708 0 2025-12-16 03:29:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-30-117 csi-node-driver-pl4wm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2ba8603d83b [] [] }} ContainerID="fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" Namespace="calico-system" Pod="csi-node-driver-pl4wm" WorkloadEndpoint="ip--172--31--30--117-k8s-csi--node--driver--pl4wm-" Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.099 [INFO][5043] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" Namespace="calico-system" Pod="csi-node-driver-pl4wm" WorkloadEndpoint="ip--172--31--30--117-k8s-csi--node--driver--pl4wm-eth0" Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.257 [INFO][5111] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" HandleID="k8s-pod-network.fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" Workload="ip--172--31--30--117-k8s-csi--node--driver--pl4wm-eth0" Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.257 
[INFO][5111] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" HandleID="k8s-pod-network.fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" Workload="ip--172--31--30--117-k8s-csi--node--driver--pl4wm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ec80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-117", "pod":"csi-node-driver-pl4wm", "timestamp":"2025-12-16 03:30:35.25708397 +0000 UTC"}, Hostname:"ip-172-31-30-117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.258 [INFO][5111] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.258 [INFO][5111] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.258 [INFO][5111] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-117' Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.287 [INFO][5111] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" host="ip-172-31-30-117" Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.310 [INFO][5111] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-117" Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.328 [INFO][5111] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.338 [INFO][5111] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.356 [INFO][5111] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.356 [INFO][5111] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" host="ip-172-31-30-117" Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.363 [INFO][5111] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7 Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.387 [INFO][5111] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" host="ip-172-31-30-117" Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.408 [INFO][5111] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.197/26] block=192.168.14.192/26 
handle="k8s-pod-network.fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" host="ip-172-31-30-117" Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.408 [INFO][5111] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.197/26] handle="k8s-pod-network.fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" host="ip-172-31-30-117" Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.408 [INFO][5111] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:30:35.495690 containerd[1851]: 2025-12-16 03:30:35.408 [INFO][5111] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.197/26] IPv6=[] ContainerID="fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" HandleID="k8s-pod-network.fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" Workload="ip--172--31--30--117-k8s-csi--node--driver--pl4wm-eth0" Dec 16 03:30:35.496630 containerd[1851]: 2025-12-16 03:30:35.422 [INFO][5043] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" Namespace="calico-system" Pod="csi-node-driver-pl4wm" WorkloadEndpoint="ip--172--31--30--117-k8s-csi--node--driver--pl4wm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--117-k8s-csi--node--driver--pl4wm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"39c8b25e-ea78-443a-855e-e43746267826", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-117", ContainerID:"", Pod:"csi-node-driver-pl4wm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2ba8603d83b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:30:35.496630 containerd[1851]: 2025-12-16 03:30:35.422 [INFO][5043] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.197/32] ContainerID="fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" Namespace="calico-system" Pod="csi-node-driver-pl4wm" WorkloadEndpoint="ip--172--31--30--117-k8s-csi--node--driver--pl4wm-eth0" Dec 16 03:30:35.496630 containerd[1851]: 2025-12-16 03:30:35.422 [INFO][5043] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ba8603d83b ContainerID="fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" Namespace="calico-system" Pod="csi-node-driver-pl4wm" WorkloadEndpoint="ip--172--31--30--117-k8s-csi--node--driver--pl4wm-eth0" Dec 16 03:30:35.496630 containerd[1851]: 2025-12-16 03:30:35.429 [INFO][5043] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" Namespace="calico-system" Pod="csi-node-driver-pl4wm" WorkloadEndpoint="ip--172--31--30--117-k8s-csi--node--driver--pl4wm-eth0" Dec 16 03:30:35.496630 containerd[1851]: 2025-12-16 03:30:35.433 [INFO][5043] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" Namespace="calico-system" Pod="csi-node-driver-pl4wm" WorkloadEndpoint="ip--172--31--30--117-k8s-csi--node--driver--pl4wm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--117-k8s-csi--node--driver--pl4wm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"39c8b25e-ea78-443a-855e-e43746267826", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-117", ContainerID:"fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7", Pod:"csi-node-driver-pl4wm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2ba8603d83b", MAC:"ce:f1:21:55:e1:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:30:35.496630 containerd[1851]: 2025-12-16 03:30:35.483 [INFO][5043] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" Namespace="calico-system" Pod="csi-node-driver-pl4wm" WorkloadEndpoint="ip--172--31--30--117-k8s-csi--node--driver--pl4wm-eth0" Dec 16 03:30:35.520000 audit[5160]: NETFILTER_CFG table=filter:128 family=2 entries=48 op=nft_register_chain pid=5160 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:30:35.520000 audit[5160]: SYSCALL arch=c000003e syscall=46 success=yes exit=23140 a0=3 a1=7ffed08708c0 a2=0 a3=7ffed08708ac items=0 ppid=4700 pid=5160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.520000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:30:35.558388 containerd[1851]: time="2025-12-16T03:30:35.558124152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:30:35.565077 containerd[1851]: time="2025-12-16T03:30:35.565030759Z" level=info msg="connecting to shim fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7" address="unix:///run/containerd/s/2de1e8a561f15d2c8e4de11a483d443b91d80b7ca558d006aa0ef01c1a563d53" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:30:35.572835 containerd[1851]: time="2025-12-16T03:30:35.572796830Z" level=info msg="StartContainer for \"383d98b3d5d2ee906fdbd96dc1fb5d511b98fbb235f3c7ec91a1c64bf0c88ba2\"" Dec 16 03:30:35.577971 containerd[1851]: time="2025-12-16T03:30:35.577924586Z" level=info msg="connecting to shim 383d98b3d5d2ee906fdbd96dc1fb5d511b98fbb235f3c7ec91a1c64bf0c88ba2" address="unix:///run/containerd/s/0fb05b8b520b4799903ab17fc600f653722afcd15afc197df4a6b9a0aa3b951b" protocol=ttrpc version=3 Dec 16 03:30:35.652236 systemd[1]: Started 
cri-containerd-fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7.scope - libcontainer container fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7. Dec 16 03:30:35.666572 systemd[1]: Started cri-containerd-383d98b3d5d2ee906fdbd96dc1fb5d511b98fbb235f3c7ec91a1c64bf0c88ba2.scope - libcontainer container 383d98b3d5d2ee906fdbd96dc1fb5d511b98fbb235f3c7ec91a1c64bf0c88ba2. Dec 16 03:30:35.709000 audit: BPF prog-id=236 op=LOAD Dec 16 03:30:35.711000 audit: BPF prog-id=237 op=LOAD Dec 16 03:30:35.711000 audit[5179]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4956 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338336439386233643564326565393036666462643936646331666235 Dec 16 03:30:35.711000 audit: BPF prog-id=237 op=UNLOAD Dec 16 03:30:35.711000 audit[5179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4956 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338336439386233643564326565393036666462643936646331666235 Dec 16 03:30:35.712000 audit: BPF prog-id=238 op=LOAD Dec 16 03:30:35.712000 audit: BPF prog-id=239 op=LOAD Dec 16 03:30:35.712000 audit[5179]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4956 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338336439386233643564326565393036666462643936646331666235 Dec 16 03:30:35.712000 audit: BPF prog-id=240 op=LOAD Dec 16 03:30:35.712000 audit[5179]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4956 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338336439386233643564326565393036666462643936646331666235 Dec 16 03:30:35.712000 audit: BPF prog-id=240 op=UNLOAD Dec 16 03:30:35.712000 audit[5179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4956 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338336439386233643564326565393036666462643936646331666235 Dec 16 03:30:35.712000 audit: BPF prog-id=239 op=UNLOAD Dec 16 03:30:35.712000 audit[5179]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4956 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338336439386233643564326565393036666462643936646331666235 Dec 16 03:30:35.713000 audit: BPF prog-id=241 op=LOAD Dec 16 03:30:35.713000 audit[5182]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5170 pid=5182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665333434323032343562633831356561336332363238383636633933 Dec 16 03:30:35.713000 audit: BPF prog-id=241 op=UNLOAD Dec 16 03:30:35.713000 audit[5182]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5170 pid=5182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665333434323032343562633831356561336332363238383636633933 Dec 16 03:30:35.712000 audit: BPF prog-id=242 op=LOAD Dec 
16 03:30:35.712000 audit[5179]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4956 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.713000 audit: BPF prog-id=243 op=LOAD Dec 16 03:30:35.713000 audit[5182]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5170 pid=5182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665333434323032343562633831356561336332363238383636633933 Dec 16 03:30:35.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338336439386233643564326565393036666462643936646331666235 Dec 16 03:30:35.713000 audit: BPF prog-id=244 op=LOAD Dec 16 03:30:35.713000 audit[5182]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5170 pid=5182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665333434323032343562633831356561336332363238383636633933 Dec 
16 03:30:35.713000 audit: BPF prog-id=244 op=UNLOAD Dec 16 03:30:35.713000 audit[5182]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5170 pid=5182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665333434323032343562633831356561336332363238383636633933 Dec 16 03:30:35.713000 audit: BPF prog-id=243 op=UNLOAD Dec 16 03:30:35.713000 audit[5182]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5170 pid=5182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665333434323032343562633831356561336332363238383636633933 Dec 16 03:30:35.713000 audit: BPF prog-id=245 op=LOAD Dec 16 03:30:35.713000 audit[5182]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5170 pid=5182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:35.713000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665333434323032343562633831356561336332363238383636633933 Dec 16 03:30:35.769382 containerd[1851]: time="2025-12-16T03:30:35.769243854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pl4wm,Uid:39c8b25e-ea78-443a-855e-e43746267826,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe34420245bc815ea3c2628866c932e30327f9e97f227c215086150fa27b81d7\"" Dec 16 03:30:35.769382 containerd[1851]: time="2025-12-16T03:30:35.769330003Z" level=info msg="StartContainer for \"383d98b3d5d2ee906fdbd96dc1fb5d511b98fbb235f3c7ec91a1c64bf0c88ba2\" returns successfully" Dec 16 03:30:35.904503 systemd-networkd[1463]: cali8972579303f: Gained IPv6LL Dec 16 03:30:35.947333 kubelet[3199]: I1216 03:30:35.947253 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bxg4k" podStartSLOduration=55.947225011 podStartE2EDuration="55.947225011s" podCreationTimestamp="2025-12-16 03:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:30:35.946947552 +0000 UTC m=+60.775882896" watchObservedRunningTime="2025-12-16 03:30:35.947225011 +0000 UTC m=+60.776160342" Dec 16 03:30:35.969698 containerd[1851]: time="2025-12-16T03:30:35.968391029Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:30:35.975313 containerd[1851]: time="2025-12-16T03:30:35.975267925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:35.975583 containerd[1851]: time="2025-12-16T03:30:35.975554047Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:30:35.976465 kubelet[3199]: E1216 03:30:35.976358 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:30:35.982234 kubelet[3199]: E1216 03:30:35.982132 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:30:35.985827 containerd[1851]: time="2025-12-16T03:30:35.985779024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:30:36.011575 kubelet[3199]: E1216 03:30:36.011389 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvbg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-95jjv_calico-system(1598e409-2173-4fc3-8415-b507d5511623): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:30:36.013304 kubelet[3199]: E1216 03:30:36.013183 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-95jjv" podUID="1598e409-2173-4fc3-8415-b507d5511623" Dec 16 03:30:36.041000 audit[5236]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=5236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:36.041000 audit[5236]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcb6633c30 a2=0 a3=7ffcb6633c1c items=0 ppid=3301 pid=5236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:36.041000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:36.050000 audit[5236]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=5236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:36.050000 audit[5236]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcb6633c30 a2=0 a3=0 items=0 ppid=3301 pid=5236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:36.050000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:36.242788 containerd[1851]: time="2025-12-16T03:30:36.242551080Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:30:36.245335 containerd[1851]: time="2025-12-16T03:30:36.245260582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:36.245523 containerd[1851]: time="2025-12-16T03:30:36.245323084Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:30:36.245623 kubelet[3199]: E1216 03:30:36.245557 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:30:36.245623 kubelet[3199]: E1216 03:30:36.245601 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:30:36.245911 kubelet[3199]: E1216 03:30:36.245844 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68pqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6b698bdfc8-x42pm_calico-system(69c31889-528a-4f2f-822c-6d89b94291a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:30:36.247392 kubelet[3199]: E1216 03:30:36.247353 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" podUID="69c31889-528a-4f2f-822c-6d89b94291a9" Dec 16 03:30:36.248268 
containerd[1851]: time="2025-12-16T03:30:36.248187114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:30:36.352667 systemd-networkd[1463]: cali18bb85df3b5: Gained IPv6LL Dec 16 03:30:36.522781 containerd[1851]: time="2025-12-16T03:30:36.522628784Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:30:36.525030 containerd[1851]: time="2025-12-16T03:30:36.524977323Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:30:36.525147 containerd[1851]: time="2025-12-16T03:30:36.525071216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:36.525357 kubelet[3199]: E1216 03:30:36.525319 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:30:36.526272 kubelet[3199]: E1216 03:30:36.525539 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:30:36.526272 kubelet[3199]: E1216 03:30:36.525739 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:df40755d990c4efb9d838aa3445cd5a6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hzn98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b9fd54f5-5npkd_calico-system(a7663fa2-6c23-44c9-b4f7-07ab48404e5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:30:36.526702 containerd[1851]: time="2025-12-16T03:30:36.526607229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:30:36.808472 containerd[1851]: time="2025-12-16T03:30:36.808335401Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:30:36.810695 containerd[1851]: time="2025-12-16T03:30:36.810616914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:30:36.810893 containerd[1851]: time="2025-12-16T03:30:36.810713036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:36.810940 kubelet[3199]: E1216 03:30:36.810903 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:30:36.810988 kubelet[3199]: E1216 03:30:36.810949 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:30:36.811872 kubelet[3199]: E1216 03:30:36.811230 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmrnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pl4wm_calico-system(39c8b25e-ea78-443a-855e-e43746267826): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 03:30:36.812756 containerd[1851]: time="2025-12-16T03:30:36.811454392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:30:36.930404 kubelet[3199]: E1216 03:30:36.929902 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" podUID="69c31889-528a-4f2f-822c-6d89b94291a9" Dec 16 03:30:36.930852 kubelet[3199]: E1216 03:30:36.930821 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-95jjv" podUID="1598e409-2173-4fc3-8415-b507d5511623" Dec 16 03:30:36.966000 audit[5241]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5241 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:36.966000 audit[5241]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff3e506470 a2=0 a3=7fff3e50645c items=0 ppid=3301 pid=5241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:36.966000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:36.977000 audit[5241]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=5241 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:36.977000 audit[5241]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff3e506470 a2=0 a3=0 items=0 ppid=3301 pid=5241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:36.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:37.004000 audit[5243]: NETFILTER_CFG table=filter:133 family=2 entries=17 op=nft_register_rule pid=5243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:37.004000 audit[5243]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffefcf11dc0 a2=0 a3=7ffefcf11dac items=0 ppid=3301 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:37.004000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:37.011000 audit[5243]: NETFILTER_CFG table=nat:134 family=2 entries=35 op=nft_register_chain pid=5243 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:37.011000 audit[5243]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffefcf11dc0 a2=0 a3=7ffefcf11dac items=0 ppid=3301 pid=5243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:37.011000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:37.046561 containerd[1851]: time="2025-12-16T03:30:37.046499933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:30:37.048846 containerd[1851]: time="2025-12-16T03:30:37.048735657Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:30:37.048846 containerd[1851]: time="2025-12-16T03:30:37.048811020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:37.049004 kubelet[3199]: E1216 03:30:37.048973 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:30:37.049449 kubelet[3199]: E1216 03:30:37.049071 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:30:37.050255 kubelet[3199]: E1216 03:30:37.049474 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzn98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b9fd54f5-5npkd_calico-system(a7663fa2-6c23-44c9-b4f7-07ab48404e5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:30:37.050460 containerd[1851]: time="2025-12-16T03:30:37.049791539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:30:37.051478 kubelet[3199]: E1216 03:30:37.051408 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9fd54f5-5npkd" podUID="a7663fa2-6c23-44c9-b4f7-07ab48404e5b" Dec 16 03:30:37.330392 containerd[1851]: time="2025-12-16T03:30:37.330343025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:30:37.332620 containerd[1851]: time="2025-12-16T03:30:37.332489866Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:30:37.332620 containerd[1851]: time="2025-12-16T03:30:37.332549113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:37.332879 kubelet[3199]: E1216 03:30:37.332832 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:30:37.333082 kubelet[3199]: E1216 03:30:37.332894 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:30:37.333144 kubelet[3199]: E1216 03:30:37.333034 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmrnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/terminatio
n-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pl4wm_calico-system(39c8b25e-ea78-443a-855e-e43746267826): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:30:37.334352 kubelet[3199]: E1216 03:30:37.334282 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:30:37.440380 systemd-networkd[1463]: cali2ba8603d83b: Gained IPv6LL Dec 16 03:30:37.930104 kubelet[3199]: E1216 03:30:37.929920 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:30:37.932832 kubelet[3199]: E1216 03:30:37.931747 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9fd54f5-5npkd" podUID="a7663fa2-6c23-44c9-b4f7-07ab48404e5b" Dec 16 03:30:37.981000 audit[5245]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=5245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:37.983835 kernel: kauditd_printk_skb: 350 callbacks suppressed Dec 16 03:30:37.984070 kernel: audit: type=1325 
audit(1765855837.981:722): table=filter:135 family=2 entries=14 op=nft_register_rule pid=5245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:37.981000 audit[5245]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc9beb5e80 a2=0 a3=7ffc9beb5e6c items=0 ppid=3301 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:37.987614 kernel: audit: type=1300 audit(1765855837.981:722): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc9beb5e80 a2=0 a3=7ffc9beb5e6c items=0 ppid=3301 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:37.981000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:37.992634 kernel: audit: type=1327 audit(1765855837.981:722): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:37.988000 audit[5245]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=5245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:37.997227 kernel: audit: type=1325 audit(1765855837.988:723): table=nat:136 family=2 entries=20 op=nft_register_rule pid=5245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:37.997331 kernel: audit: type=1300 audit(1765855837.988:723): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc9beb5e80 a2=0 a3=7ffc9beb5e6c items=0 ppid=3301 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:30:37.988000 audit[5245]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc9beb5e80 a2=0 a3=7ffc9beb5e6c items=0 ppid=3301 pid=5245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:37.988000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:38.003548 kernel: audit: type=1327 audit(1765855837.988:723): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:39.490001 ntpd[1826]: Listen normally on 6 vxlan.calico 192.168.14.192:123 Dec 16 03:30:39.490071 ntpd[1826]: Listen normally on 7 vxlan.calico [fe80::64e5:19ff:febf:e2e6%4]:123 Dec 16 03:30:39.490678 ntpd[1826]: 16 Dec 03:30:39 ntpd[1826]: Listen normally on 6 vxlan.calico 192.168.14.192:123 Dec 16 03:30:39.490678 ntpd[1826]: 16 Dec 03:30:39 ntpd[1826]: Listen normally on 7 vxlan.calico [fe80::64e5:19ff:febf:e2e6%4]:123 Dec 16 03:30:39.490678 ntpd[1826]: 16 Dec 03:30:39 ntpd[1826]: Listen normally on 8 cali3eb4588be1a [fe80::ecee:eeff:feee:eeee%7]:123 Dec 16 03:30:39.490678 ntpd[1826]: 16 Dec 03:30:39 ntpd[1826]: Listen normally on 9 caliae94256cd47 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 03:30:39.490678 ntpd[1826]: 16 Dec 03:30:39 ntpd[1826]: Listen normally on 10 cali18bb85df3b5 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 03:30:39.490678 ntpd[1826]: 16 Dec 03:30:39 ntpd[1826]: Listen normally on 11 cali8972579303f [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 03:30:39.490678 ntpd[1826]: 16 Dec 03:30:39 ntpd[1826]: Listen normally on 12 cali2ba8603d83b [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 03:30:39.490102 ntpd[1826]: Listen normally on 8 cali3eb4588be1a [fe80::ecee:eeff:feee:eeee%7]:123 Dec 16 03:30:39.490127 ntpd[1826]: Listen normally on 9 
caliae94256cd47 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 03:30:39.490234 ntpd[1826]: Listen normally on 10 cali18bb85df3b5 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 03:30:39.490268 ntpd[1826]: Listen normally on 11 cali8972579303f [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 03:30:39.490297 ntpd[1826]: Listen normally on 12 cali2ba8603d83b [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 03:30:41.379398 containerd[1851]: time="2025-12-16T03:30:41.379356665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lgkx9,Uid:56b458df-a303-4716-b4d2-e294c54eff31,Namespace:kube-system,Attempt:0,}" Dec 16 03:30:41.609078 (udev-worker)[5273]: Network interface NamePolicy= disabled on kernel command line. Dec 16 03:30:41.610865 systemd-networkd[1463]: cali00025de27dc: Link UP Dec 16 03:30:41.612472 systemd-networkd[1463]: cali00025de27dc: Gained carrier Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.522 [INFO][5254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--117-k8s-coredns--668d6bf9bc--lgkx9-eth0 coredns-668d6bf9bc- kube-system 56b458df-a303-4716-b4d2-e294c54eff31 819 0 2025-12-16 03:29:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-117 coredns-668d6bf9bc-lgkx9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali00025de27dc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgkx9" WorkloadEndpoint="ip--172--31--30--117-k8s-coredns--668d6bf9bc--lgkx9-" Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.523 [INFO][5254] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgkx9" WorkloadEndpoint="ip--172--31--30--117-k8s-coredns--668d6bf9bc--lgkx9-eth0" Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.551 [INFO][5266] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" HandleID="k8s-pod-network.804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" Workload="ip--172--31--30--117-k8s-coredns--668d6bf9bc--lgkx9-eth0" Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.551 [INFO][5266] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" HandleID="k8s-pod-network.804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" Workload="ip--172--31--30--117-k8s-coredns--668d6bf9bc--lgkx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-117", "pod":"coredns-668d6bf9bc-lgkx9", "timestamp":"2025-12-16 03:30:41.551494813 +0000 UTC"}, Hostname:"ip-172-31-30-117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.551 [INFO][5266] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.551 [INFO][5266] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.552 [INFO][5266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-117' Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.561 [INFO][5266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" host="ip-172-31-30-117" Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.566 [INFO][5266] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-117" Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.573 [INFO][5266] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.575 [INFO][5266] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.580 [INFO][5266] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.581 [INFO][5266] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" host="ip-172-31-30-117" Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.583 [INFO][5266] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589 Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.591 [INFO][5266] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" host="ip-172-31-30-117" Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.598 [INFO][5266] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.198/26] block=192.168.14.192/26 
handle="k8s-pod-network.804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" host="ip-172-31-30-117" Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.598 [INFO][5266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.198/26] handle="k8s-pod-network.804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" host="ip-172-31-30-117" Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.598 [INFO][5266] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:30:41.642266 containerd[1851]: 2025-12-16 03:30:41.598 [INFO][5266] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.198/26] IPv6=[] ContainerID="804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" HandleID="k8s-pod-network.804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" Workload="ip--172--31--30--117-k8s-coredns--668d6bf9bc--lgkx9-eth0" Dec 16 03:30:41.642882 containerd[1851]: 2025-12-16 03:30:41.602 [INFO][5254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgkx9" WorkloadEndpoint="ip--172--31--30--117-k8s-coredns--668d6bf9bc--lgkx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--117-k8s-coredns--668d6bf9bc--lgkx9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"56b458df-a303-4716-b4d2-e294c54eff31", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 29, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-117", ContainerID:"", Pod:"coredns-668d6bf9bc-lgkx9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00025de27dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:30:41.642882 containerd[1851]: 2025-12-16 03:30:41.602 [INFO][5254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.198/32] ContainerID="804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgkx9" WorkloadEndpoint="ip--172--31--30--117-k8s-coredns--668d6bf9bc--lgkx9-eth0" Dec 16 03:30:41.642882 containerd[1851]: 2025-12-16 03:30:41.602 [INFO][5254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00025de27dc ContainerID="804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgkx9" WorkloadEndpoint="ip--172--31--30--117-k8s-coredns--668d6bf9bc--lgkx9-eth0" Dec 16 03:30:41.642882 containerd[1851]: 2025-12-16 03:30:41.613 [INFO][5254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-lgkx9" WorkloadEndpoint="ip--172--31--30--117-k8s-coredns--668d6bf9bc--lgkx9-eth0" Dec 16 03:30:41.642882 containerd[1851]: 2025-12-16 03:30:41.614 [INFO][5254] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgkx9" WorkloadEndpoint="ip--172--31--30--117-k8s-coredns--668d6bf9bc--lgkx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--117-k8s-coredns--668d6bf9bc--lgkx9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"56b458df-a303-4716-b4d2-e294c54eff31", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 29, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-117", ContainerID:"804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589", Pod:"coredns-668d6bf9bc-lgkx9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00025de27dc", MAC:"06:7e:43:9e:55:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:30:41.642882 containerd[1851]: 2025-12-16 03:30:41.635 [INFO][5254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" Namespace="kube-system" Pod="coredns-668d6bf9bc-lgkx9" WorkloadEndpoint="ip--172--31--30--117-k8s-coredns--668d6bf9bc--lgkx9-eth0" Dec 16 03:30:41.661000 audit[5284]: NETFILTER_CFG table=filter:137 family=2 entries=48 op=nft_register_chain pid=5284 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:30:41.661000 audit[5284]: SYSCALL arch=c000003e syscall=46 success=yes exit=22720 a0=3 a1=7fff26e9fbe0 a2=0 a3=7fff26e9fbcc items=0 ppid=4700 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:41.668051 kernel: audit: type=1325 audit(1765855841.661:724): table=filter:137 family=2 entries=48 op=nft_register_chain pid=5284 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:30:41.668123 kernel: audit: type=1300 audit(1765855841.661:724): arch=c000003e syscall=46 success=yes exit=22720 a0=3 a1=7fff26e9fbe0 a2=0 a3=7fff26e9fbcc items=0 ppid=4700 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:41.661000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:30:41.673619 kernel: audit: type=1327 audit(1765855841.661:724): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:30:41.718394 containerd[1851]: time="2025-12-16T03:30:41.718295282Z" level=info msg="connecting to shim 804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589" address="unix:///run/containerd/s/fed9e7f7b0d4df974e449cb69831fdb222cf66b00e473f1840e53b9b1bfbd53d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:30:41.755437 systemd[1]: Started cri-containerd-804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589.scope - libcontainer container 804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589. Dec 16 03:30:41.773358 kernel: audit: type=1334 audit(1765855841.770:725): prog-id=246 op=LOAD Dec 16 03:30:41.770000 audit: BPF prog-id=246 op=LOAD Dec 16 03:30:41.771000 audit: BPF prog-id=247 op=LOAD Dec 16 03:30:41.771000 audit[5305]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5294 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:41.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830346238353266383363356332623365366538653763653361353333 Dec 16 03:30:41.771000 audit: BPF prog-id=247 op=UNLOAD Dec 16 03:30:41.771000 audit[5305]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5294 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:41.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830346238353266383363356332623365366538653763653361353333 Dec 16 03:30:41.772000 audit: BPF prog-id=248 op=LOAD Dec 16 03:30:41.772000 audit[5305]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5294 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:41.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830346238353266383363356332623365366538653763653361353333 Dec 16 03:30:41.772000 audit: BPF prog-id=249 op=LOAD Dec 16 03:30:41.772000 audit[5305]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5294 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:41.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830346238353266383363356332623365366538653763653361353333 Dec 16 03:30:41.772000 audit: BPF prog-id=249 op=UNLOAD Dec 16 03:30:41.772000 audit[5305]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5294 pid=5305 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:41.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830346238353266383363356332623365366538653763653361353333 Dec 16 03:30:41.772000 audit: BPF prog-id=248 op=UNLOAD Dec 16 03:30:41.772000 audit[5305]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5294 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:41.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830346238353266383363356332623365366538653763653361353333 Dec 16 03:30:41.772000 audit: BPF prog-id=250 op=LOAD Dec 16 03:30:41.772000 audit[5305]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5294 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:41.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830346238353266383363356332623365366538653763653361353333 Dec 16 03:30:41.824699 containerd[1851]: time="2025-12-16T03:30:41.824656647Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-lgkx9,Uid:56b458df-a303-4716-b4d2-e294c54eff31,Namespace:kube-system,Attempt:0,} returns sandbox id \"804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589\"" Dec 16 03:30:41.854200 containerd[1851]: time="2025-12-16T03:30:41.854122024Z" level=info msg="CreateContainer within sandbox \"804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:30:41.870979 containerd[1851]: time="2025-12-16T03:30:41.870318480Z" level=info msg="Container 25295f13e0082d966a1346345ecd06e8f3815223eabc4706bbd3e4ccd0bf9079: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:30:41.893815 containerd[1851]: time="2025-12-16T03:30:41.893602880Z" level=info msg="CreateContainer within sandbox \"804b852f83c5c2b3e6e8e7ce3a53398f6d11b8440269631b3cff36294247c589\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25295f13e0082d966a1346345ecd06e8f3815223eabc4706bbd3e4ccd0bf9079\"" Dec 16 03:30:41.895789 containerd[1851]: time="2025-12-16T03:30:41.895391329Z" level=info msg="StartContainer for \"25295f13e0082d966a1346345ecd06e8f3815223eabc4706bbd3e4ccd0bf9079\"" Dec 16 03:30:41.901147 containerd[1851]: time="2025-12-16T03:30:41.900800619Z" level=info msg="connecting to shim 25295f13e0082d966a1346345ecd06e8f3815223eabc4706bbd3e4ccd0bf9079" address="unix:///run/containerd/s/fed9e7f7b0d4df974e449cb69831fdb222cf66b00e473f1840e53b9b1bfbd53d" protocol=ttrpc version=3 Dec 16 03:30:41.935598 systemd[1]: Started cri-containerd-25295f13e0082d966a1346345ecd06e8f3815223eabc4706bbd3e4ccd0bf9079.scope - libcontainer container 25295f13e0082d966a1346345ecd06e8f3815223eabc4706bbd3e4ccd0bf9079. 
Dec 16 03:30:41.958000 audit: BPF prog-id=251 op=LOAD Dec 16 03:30:41.958000 audit: BPF prog-id=252 op=LOAD Dec 16 03:30:41.958000 audit[5329]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5294 pid=5329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:41.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235323935663133653030383264393636613133343633343565636430 Dec 16 03:30:41.959000 audit: BPF prog-id=252 op=UNLOAD Dec 16 03:30:41.959000 audit[5329]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5294 pid=5329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:41.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235323935663133653030383264393636613133343633343565636430 Dec 16 03:30:41.959000 audit: BPF prog-id=253 op=LOAD Dec 16 03:30:41.959000 audit[5329]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5294 pid=5329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:41.959000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235323935663133653030383264393636613133343633343565636430 Dec 16 03:30:41.959000 audit: BPF prog-id=254 op=LOAD Dec 16 03:30:41.959000 audit[5329]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5294 pid=5329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:41.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235323935663133653030383264393636613133343633343565636430 Dec 16 03:30:41.959000 audit: BPF prog-id=254 op=UNLOAD Dec 16 03:30:41.959000 audit[5329]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5294 pid=5329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:41.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235323935663133653030383264393636613133343633343565636430 Dec 16 03:30:41.959000 audit: BPF prog-id=253 op=UNLOAD Dec 16 03:30:41.959000 audit[5329]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5294 pid=5329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:30:41.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235323935663133653030383264393636613133343633343565636430 Dec 16 03:30:41.959000 audit: BPF prog-id=255 op=LOAD Dec 16 03:30:41.959000 audit[5329]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5294 pid=5329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:41.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235323935663133653030383264393636613133343633343565636430 Dec 16 03:30:41.985715 containerd[1851]: time="2025-12-16T03:30:41.985550806Z" level=info msg="StartContainer for \"25295f13e0082d966a1346345ecd06e8f3815223eabc4706bbd3e4ccd0bf9079\" returns successfully" Dec 16 03:30:42.379369 containerd[1851]: time="2025-12-16T03:30:42.379131807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f46b988d5-dv5td,Uid:0feff437-5720-498b-a3c1-fbae9f5f245c,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:30:42.636355 systemd-networkd[1463]: cali6eadff203fe: Link UP Dec 16 03:30:42.638116 systemd-networkd[1463]: cali6eadff203fe: Gained carrier Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.462 [INFO][5367] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--dv5td-eth0 calico-apiserver-5f46b988d5- calico-apiserver 0feff437-5720-498b-a3c1-fbae9f5f245c 825 0 2025-12-16 03:29:53 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f46b988d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-117 calico-apiserver-5f46b988d5-dv5td eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6eadff203fe [] [] }} ContainerID="d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" Namespace="calico-apiserver" Pod="calico-apiserver-5f46b988d5-dv5td" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--dv5td-" Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.464 [INFO][5367] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" Namespace="calico-apiserver" Pod="calico-apiserver-5f46b988d5-dv5td" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--dv5td-eth0" Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.559 [INFO][5377] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" HandleID="k8s-pod-network.d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" Workload="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--dv5td-eth0" Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.559 [INFO][5377] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" HandleID="k8s-pod-network.d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" Workload="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--dv5td-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032ba00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-117", "pod":"calico-apiserver-5f46b988d5-dv5td", 
"timestamp":"2025-12-16 03:30:42.559284166 +0000 UTC"}, Hostname:"ip-172-31-30-117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.559 [INFO][5377] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.559 [INFO][5377] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.559 [INFO][5377] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-117' Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.573 [INFO][5377] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" host="ip-172-31-30-117" Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.585 [INFO][5377] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-117" Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.595 [INFO][5377] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.598 [INFO][5377] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.602 [INFO][5377] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.602 [INFO][5377] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" host="ip-172-31-30-117" Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.604 [INFO][5377] 
ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.613 [INFO][5377] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" host="ip-172-31-30-117" Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.627 [INFO][5377] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.199/26] block=192.168.14.192/26 handle="k8s-pod-network.d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" host="ip-172-31-30-117" Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.628 [INFO][5377] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.199/26] handle="k8s-pod-network.d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" host="ip-172-31-30-117" Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.628 [INFO][5377] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:30:42.660218 containerd[1851]: 2025-12-16 03:30:42.628 [INFO][5377] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.199/26] IPv6=[] ContainerID="d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" HandleID="k8s-pod-network.d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" Workload="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--dv5td-eth0" Dec 16 03:30:42.664898 containerd[1851]: 2025-12-16 03:30:42.632 [INFO][5367] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" Namespace="calico-apiserver" Pod="calico-apiserver-5f46b988d5-dv5td" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--dv5td-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--dv5td-eth0", GenerateName:"calico-apiserver-5f46b988d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0feff437-5720-498b-a3c1-fbae9f5f245c", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f46b988d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-117", ContainerID:"", Pod:"calico-apiserver-5f46b988d5-dv5td", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.199/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6eadff203fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:30:42.664898 containerd[1851]: 2025-12-16 03:30:42.632 [INFO][5367] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.199/32] ContainerID="d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" Namespace="calico-apiserver" Pod="calico-apiserver-5f46b988d5-dv5td" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--dv5td-eth0" Dec 16 03:30:42.664898 containerd[1851]: 2025-12-16 03:30:42.632 [INFO][5367] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6eadff203fe ContainerID="d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" Namespace="calico-apiserver" Pod="calico-apiserver-5f46b988d5-dv5td" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--dv5td-eth0" Dec 16 03:30:42.664898 containerd[1851]: 2025-12-16 03:30:42.638 [INFO][5367] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" Namespace="calico-apiserver" Pod="calico-apiserver-5f46b988d5-dv5td" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--dv5td-eth0" Dec 16 03:30:42.664898 containerd[1851]: 2025-12-16 03:30:42.638 [INFO][5367] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" Namespace="calico-apiserver" Pod="calico-apiserver-5f46b988d5-dv5td" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--dv5td-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--dv5td-eth0", GenerateName:"calico-apiserver-5f46b988d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0feff437-5720-498b-a3c1-fbae9f5f245c", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f46b988d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-117", ContainerID:"d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b", Pod:"calico-apiserver-5f46b988d5-dv5td", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6eadff203fe", MAC:"de:d9:65:8e:38:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:30:42.664898 containerd[1851]: 2025-12-16 03:30:42.655 [INFO][5367] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" Namespace="calico-apiserver" Pod="calico-apiserver-5f46b988d5-dv5td" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--dv5td-eth0" Dec 16 03:30:42.705657 containerd[1851]: time="2025-12-16T03:30:42.705578536Z" level=info msg="connecting to shim 
d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b" address="unix:///run/containerd/s/222299adaa1c3832c05a9aa3f81085e0dcfd0bbe4aa28845dc62a9eb3b844dd5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:30:42.709000 audit[5391]: NETFILTER_CFG table=filter:138 family=2 entries=70 op=nft_register_chain pid=5391 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:30:42.709000 audit[5391]: SYSCALL arch=c000003e syscall=46 success=yes exit=34148 a0=3 a1=7ffc0dfa46e0 a2=0 a3=7ffc0dfa46cc items=0 ppid=4700 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:42.709000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:30:42.741706 systemd[1]: Started cri-containerd-d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b.scope - libcontainer container d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b. 
Dec 16 03:30:42.756000 audit: BPF prog-id=256 op=LOAD Dec 16 03:30:42.757000 audit: BPF prog-id=257 op=LOAD Dec 16 03:30:42.757000 audit[5411]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5401 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:42.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313362303336356130353065363365343866343661623138336663 Dec 16 03:30:42.757000 audit: BPF prog-id=257 op=UNLOAD Dec 16 03:30:42.757000 audit[5411]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5401 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:42.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313362303336356130353065363365343866343661623138336663 Dec 16 03:30:42.757000 audit: BPF prog-id=258 op=LOAD Dec 16 03:30:42.757000 audit[5411]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=5401 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:42.757000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313362303336356130353065363365343866343661623138336663 Dec 16 03:30:42.757000 audit: BPF prog-id=259 op=LOAD Dec 16 03:30:42.757000 audit[5411]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=5401 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:42.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313362303336356130353065363365343866343661623138336663 Dec 16 03:30:42.757000 audit: BPF prog-id=259 op=UNLOAD Dec 16 03:30:42.757000 audit[5411]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5401 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:42.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313362303336356130353065363365343866343661623138336663 Dec 16 03:30:42.757000 audit: BPF prog-id=258 op=UNLOAD Dec 16 03:30:42.757000 audit[5411]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5401 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:30:42.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313362303336356130353065363365343866343661623138336663 Dec 16 03:30:42.757000 audit: BPF prog-id=260 op=LOAD Dec 16 03:30:42.757000 audit[5411]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=5401 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:42.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313362303336356130353065363365343866343661623138336663 Dec 16 03:30:42.802134 containerd[1851]: time="2025-12-16T03:30:42.802083048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f46b988d5-dv5td,Uid:0feff437-5720-498b-a3c1-fbae9f5f245c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d313b0365a050e63e48f46ab183fcee52ff8639c8206a61d1debcc7412f63b2b\"" Dec 16 03:30:42.804218 containerd[1851]: time="2025-12-16T03:30:42.804184462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:30:42.944779 systemd-networkd[1463]: cali00025de27dc: Gained IPv6LL Dec 16 03:30:43.002323 kubelet[3199]: I1216 03:30:43.002234 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lgkx9" podStartSLOduration=63.002209632 podStartE2EDuration="1m3.002209632s" podCreationTimestamp="2025-12-16 03:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 
03:30:42.980975782 +0000 UTC m=+67.809911123" watchObservedRunningTime="2025-12-16 03:30:43.002209632 +0000 UTC m=+67.831144962" Dec 16 03:30:43.005000 audit[5438]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=5438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:43.011978 kernel: kauditd_printk_skb: 68 callbacks suppressed Dec 16 03:30:43.012085 kernel: audit: type=1325 audit(1765855843.005:750): table=filter:139 family=2 entries=14 op=nft_register_rule pid=5438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:43.005000 audit[5438]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffff55948c0 a2=0 a3=7ffff55948ac items=0 ppid=3301 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:43.019223 kernel: audit: type=1300 audit(1765855843.005:750): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffff55948c0 a2=0 a3=7ffff55948ac items=0 ppid=3301 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:43.005000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:43.026838 kernel: audit: type=1327 audit(1765855843.005:750): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:43.026952 kernel: audit: type=1325 audit(1765855843.013:751): table=nat:140 family=2 entries=44 op=nft_register_rule pid=5438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:43.013000 audit[5438]: NETFILTER_CFG table=nat:140 family=2 entries=44 op=nft_register_rule pid=5438 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:43.034735 kernel: audit: type=1300 audit(1765855843.013:751): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffff55948c0 a2=0 a3=7ffff55948ac items=0 ppid=3301 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:43.013000 audit[5438]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffff55948c0 a2=0 a3=7ffff55948ac items=0 ppid=3301 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:43.038217 kernel: audit: type=1327 audit(1765855843.013:751): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:43.013000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:43.047000 audit[5440]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5440 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:43.057940 kernel: audit: type=1325 audit(1765855843.047:752): table=filter:141 family=2 entries=14 op=nft_register_rule pid=5440 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:43.058045 kernel: audit: type=1300 audit(1765855843.047:752): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdd18916f0 a2=0 a3=7ffdd18916dc items=0 ppid=3301 pid=5440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:43.047000 audit[5440]: SYSCALL arch=c000003e syscall=46 
success=yes exit=5248 a0=3 a1=7ffdd18916f0 a2=0 a3=7ffdd18916dc items=0 ppid=3301 pid=5440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:43.047000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:43.061233 kernel: audit: type=1327 audit(1765855843.047:752): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:43.126396 containerd[1851]: time="2025-12-16T03:30:43.126322795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:30:43.129045 containerd[1851]: time="2025-12-16T03:30:43.128806481Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:30:43.129045 containerd[1851]: time="2025-12-16T03:30:43.128926935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:43.129325 kubelet[3199]: E1216 03:30:43.129116 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:30:43.129325 kubelet[3199]: E1216 03:30:43.129156 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:30:43.129418 kubelet[3199]: E1216 03:30:43.129326 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k88lr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f46b988d5-dv5td_calico-apiserver(0feff437-5720-498b-a3c1-fbae9f5f245c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:30:43.130898 kubelet[3199]: E1216 03:30:43.130833 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" podUID="0feff437-5720-498b-a3c1-fbae9f5f245c" Dec 16 03:30:43.175000 audit[5440]: NETFILTER_CFG table=nat:142 family=2 entries=56 op=nft_register_chain pid=5440 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:43.175000 audit[5440]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffdd18916f0 a2=0 a3=7ffdd18916dc items=0 
ppid=3301 pid=5440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:43.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:43.180335 kernel: audit: type=1325 audit(1765855843.175:753): table=nat:142 family=2 entries=56 op=nft_register_chain pid=5440 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:43.969432 systemd-networkd[1463]: cali6eadff203fe: Gained IPv6LL Dec 16 03:30:43.973515 kubelet[3199]: E1216 03:30:43.973235 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" podUID="0feff437-5720-498b-a3c1-fbae9f5f245c" Dec 16 03:30:44.014000 audit[5443]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5443 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:44.014000 audit[5443]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd30dbbc10 a2=0 a3=7ffd30dbbbfc items=0 ppid=3301 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:44.014000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:44.020000 audit[5443]: NETFILTER_CFG table=nat:144 
family=2 entries=20 op=nft_register_rule pid=5443 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:44.020000 audit[5443]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd30dbbc10 a2=0 a3=7ffd30dbbbfc items=0 ppid=3301 pid=5443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:44.020000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:44.384095 containerd[1851]: time="2025-12-16T03:30:44.382406271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f46b988d5-xtbpb,Uid:e32d83e7-8260-42ce-a13a-b8e2a7a65181,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:30:44.537626 systemd-networkd[1463]: cali3fb75ec7fa9: Link UP Dec 16 03:30:44.539342 systemd-networkd[1463]: cali3fb75ec7fa9: Gained carrier Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.443 [INFO][5451] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--xtbpb-eth0 calico-apiserver-5f46b988d5- calico-apiserver e32d83e7-8260-42ce-a13a-b8e2a7a65181 828 0 2025-12-16 03:29:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f46b988d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-117 calico-apiserver-5f46b988d5-xtbpb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3fb75ec7fa9 [] [] }} ContainerID="7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" Namespace="calico-apiserver" Pod="calico-apiserver-5f46b988d5-xtbpb" 
WorkloadEndpoint="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--xtbpb-" Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.444 [INFO][5451] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" Namespace="calico-apiserver" Pod="calico-apiserver-5f46b988d5-xtbpb" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--xtbpb-eth0" Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.476 [INFO][5462] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" HandleID="k8s-pod-network.7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" Workload="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--xtbpb-eth0" Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.476 [INFO][5462] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" HandleID="k8s-pod-network.7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" Workload="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--xtbpb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5220), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-117", "pod":"calico-apiserver-5f46b988d5-xtbpb", "timestamp":"2025-12-16 03:30:44.476587125 +0000 UTC"}, Hostname:"ip-172-31-30-117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.477 [INFO][5462] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.477 [INFO][5462] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.477 [INFO][5462] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-117' Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.487 [INFO][5462] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" host="ip-172-31-30-117" Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.495 [INFO][5462] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-117" Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.502 [INFO][5462] ipam/ipam.go 511: Trying affinity for 192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.505 [INFO][5462] ipam/ipam.go 158: Attempting to load block cidr=192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.508 [INFO][5462] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="ip-172-31-30-117" Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.508 [INFO][5462] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" host="ip-172-31-30-117" Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.511 [INFO][5462] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.517 [INFO][5462] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" host="ip-172-31-30-117" Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.528 [INFO][5462] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.14.200/26] block=192.168.14.192/26 
handle="k8s-pod-network.7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" host="ip-172-31-30-117" Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.528 [INFO][5462] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.14.200/26] handle="k8s-pod-network.7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" host="ip-172-31-30-117" Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.528 [INFO][5462] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:30:44.566009 containerd[1851]: 2025-12-16 03:30:44.528 [INFO][5462] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.14.200/26] IPv6=[] ContainerID="7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" HandleID="k8s-pod-network.7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" Workload="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--xtbpb-eth0" Dec 16 03:30:44.569037 containerd[1851]: 2025-12-16 03:30:44.532 [INFO][5451] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" Namespace="calico-apiserver" Pod="calico-apiserver-5f46b988d5-xtbpb" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--xtbpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--xtbpb-eth0", GenerateName:"calico-apiserver-5f46b988d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e32d83e7-8260-42ce-a13a-b8e2a7a65181", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f46b988d5", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-117", ContainerID:"", Pod:"calico-apiserver-5f46b988d5-xtbpb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fb75ec7fa9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:30:44.569037 containerd[1851]: 2025-12-16 03:30:44.533 [INFO][5451] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.14.200/32] ContainerID="7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" Namespace="calico-apiserver" Pod="calico-apiserver-5f46b988d5-xtbpb" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--xtbpb-eth0" Dec 16 03:30:44.569037 containerd[1851]: 2025-12-16 03:30:44.533 [INFO][5451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fb75ec7fa9 ContainerID="7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" Namespace="calico-apiserver" Pod="calico-apiserver-5f46b988d5-xtbpb" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--xtbpb-eth0" Dec 16 03:30:44.569037 containerd[1851]: 2025-12-16 03:30:44.540 [INFO][5451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" Namespace="calico-apiserver" Pod="calico-apiserver-5f46b988d5-xtbpb" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--xtbpb-eth0" Dec 16 
03:30:44.569037 containerd[1851]: 2025-12-16 03:30:44.541 [INFO][5451] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" Namespace="calico-apiserver" Pod="calico-apiserver-5f46b988d5-xtbpb" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--xtbpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--xtbpb-eth0", GenerateName:"calico-apiserver-5f46b988d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"e32d83e7-8260-42ce-a13a-b8e2a7a65181", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f46b988d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-117", ContainerID:"7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a", Pod:"calico-apiserver-5f46b988d5-xtbpb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3fb75ec7fa9", MAC:"1a:c2:21:f0:6b:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 
03:30:44.569037 containerd[1851]: 2025-12-16 03:30:44.559 [INFO][5451] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" Namespace="calico-apiserver" Pod="calico-apiserver-5f46b988d5-xtbpb" WorkloadEndpoint="ip--172--31--30--117-k8s-calico--apiserver--5f46b988d5--xtbpb-eth0" Dec 16 03:30:44.612000 audit[5477]: NETFILTER_CFG table=filter:145 family=2 entries=67 op=nft_register_chain pid=5477 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:30:44.612000 audit[5477]: SYSCALL arch=c000003e syscall=46 success=yes exit=31868 a0=3 a1=7fffd12e1e50 a2=0 a3=7fffd12e1e3c items=0 ppid=4700 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:44.612000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:30:44.623197 containerd[1851]: time="2025-12-16T03:30:44.621375515Z" level=info msg="connecting to shim 7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a" address="unix:///run/containerd/s/c653e92e2eee1d9e4ed6267e3f0c4d7c39519daa9c8e4600142ac8389d1a2673" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:30:44.659455 systemd[1]: Started cri-containerd-7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a.scope - libcontainer container 7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a. 
Dec 16 03:30:44.676000 audit: BPF prog-id=261 op=LOAD Dec 16 03:30:44.676000 audit: BPF prog-id=262 op=LOAD Dec 16 03:30:44.676000 audit[5498]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5487 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:44.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764386664356138316435643139313161663533336663306234333463 Dec 16 03:30:44.676000 audit: BPF prog-id=262 op=UNLOAD Dec 16 03:30:44.676000 audit[5498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5487 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:44.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764386664356138316435643139313161663533336663306234333463 Dec 16 03:30:44.676000 audit: BPF prog-id=263 op=LOAD Dec 16 03:30:44.676000 audit[5498]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5487 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:44.676000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764386664356138316435643139313161663533336663306234333463 Dec 16 03:30:44.676000 audit: BPF prog-id=264 op=LOAD Dec 16 03:30:44.676000 audit[5498]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5487 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:44.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764386664356138316435643139313161663533336663306234333463 Dec 16 03:30:44.676000 audit: BPF prog-id=264 op=UNLOAD Dec 16 03:30:44.676000 audit[5498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5487 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:44.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764386664356138316435643139313161663533336663306234333463 Dec 16 03:30:44.677000 audit: BPF prog-id=263 op=UNLOAD Dec 16 03:30:44.677000 audit[5498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5487 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:30:44.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764386664356138316435643139313161663533336663306234333463 Dec 16 03:30:44.677000 audit: BPF prog-id=265 op=LOAD Dec 16 03:30:44.677000 audit[5498]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5487 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:44.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764386664356138316435643139313161663533336663306234333463 Dec 16 03:30:44.725196 containerd[1851]: time="2025-12-16T03:30:44.725152712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f46b988d5-xtbpb,Uid:e32d83e7-8260-42ce-a13a-b8e2a7a65181,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7d8fd5a81d5d1911af533fc0b434c2fb1a1b8e1b8f4e36e26a6bad851754952a\"" Dec 16 03:30:44.727039 containerd[1851]: time="2025-12-16T03:30:44.726974978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:30:44.988145 containerd[1851]: time="2025-12-16T03:30:44.988096551Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:30:44.990512 containerd[1851]: time="2025-12-16T03:30:44.990447926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:44.992705 containerd[1851]: time="2025-12-16T03:30:44.992369210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:30:44.992953 kubelet[3199]: E1216 03:30:44.992896 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:30:44.995382 kubelet[3199]: E1216 03:30:44.992949 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:30:44.995382 kubelet[3199]: E1216 03:30:44.993138 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdx4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f46b988d5-xtbpb_calico-apiserver(e32d83e7-8260-42ce-a13a-b8e2a7a65181): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:30:44.995382 kubelet[3199]: E1216 03:30:44.995230 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" podUID="e32d83e7-8260-42ce-a13a-b8e2a7a65181" Dec 16 03:30:45.994987 kubelet[3199]: E1216 03:30:45.994926 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" podUID="e32d83e7-8260-42ce-a13a-b8e2a7a65181" Dec 16 03:30:46.019479 systemd-networkd[1463]: cali3fb75ec7fa9: Gained IPv6LL Dec 16 03:30:46.081000 audit[5523]: NETFILTER_CFG table=filter:146 family=2 entries=14 op=nft_register_rule pid=5523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:46.081000 audit[5523]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffef528f110 a2=0 a3=7ffef528f0fc items=0 ppid=3301 pid=5523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:46.081000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:46.088000 audit[5523]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:30:46.088000 audit[5523]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffef528f110 a2=0 a3=7ffef528f0fc items=0 ppid=3301 pid=5523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:30:46.088000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:30:47.381398 containerd[1851]: time="2025-12-16T03:30:47.381343497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:30:47.660446 containerd[1851]: time="2025-12-16T03:30:47.660310951Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:30:47.662623 containerd[1851]: time="2025-12-16T03:30:47.662521429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:30:47.662623 containerd[1851]: time="2025-12-16T03:30:47.662527290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:47.662950 kubelet[3199]: E1216 03:30:47.662795 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:30:47.662950 kubelet[3199]: E1216 03:30:47.662846 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:30:47.663622 kubelet[3199]: E1216 03:30:47.663031 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68pqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6b698bdfc8-x42pm_calico-system(69c31889-528a-4f2f-822c-6d89b94291a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:30:47.665004 kubelet[3199]: E1216 03:30:47.664962 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" podUID="69c31889-528a-4f2f-822c-6d89b94291a9" Dec 16 03:30:48.380266 
containerd[1851]: time="2025-12-16T03:30:48.380012418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:30:48.489511 ntpd[1826]: Listen normally on 13 cali00025de27dc [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 03:30:48.489902 ntpd[1826]: 16 Dec 03:30:48 ntpd[1826]: Listen normally on 13 cali00025de27dc [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 03:30:48.489902 ntpd[1826]: 16 Dec 03:30:48 ntpd[1826]: Listen normally on 14 cali6eadff203fe [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 03:30:48.489902 ntpd[1826]: 16 Dec 03:30:48 ntpd[1826]: Listen normally on 15 cali3fb75ec7fa9 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 03:30:48.489564 ntpd[1826]: Listen normally on 14 cali6eadff203fe [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 03:30:48.489588 ntpd[1826]: Listen normally on 15 cali3fb75ec7fa9 [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 03:30:48.820864 containerd[1851]: time="2025-12-16T03:30:48.820817184Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:30:48.823024 containerd[1851]: time="2025-12-16T03:30:48.822964916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:30:48.823386 containerd[1851]: time="2025-12-16T03:30:48.823063071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:48.823484 kubelet[3199]: E1216 03:30:48.823247 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:30:48.823925 kubelet[3199]: E1216 03:30:48.823492 3199 kuberuntime_image.go:55] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:30:48.823925 kubelet[3199]: E1216 03:30:48.823678 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvbg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-95jjv_calico-system(1598e409-2173-4fc3-8415-b507d5511623): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:30:48.825265 kubelet[3199]: E1216 03:30:48.825114 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-95jjv" podUID="1598e409-2173-4fc3-8415-b507d5511623" Dec 16 03:30:50.380433 containerd[1851]: time="2025-12-16T03:30:50.380130051Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:30:50.613148 containerd[1851]: time="2025-12-16T03:30:50.613088739Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:30:50.615361 containerd[1851]: time="2025-12-16T03:30:50.615302028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:30:50.615653 containerd[1851]: time="2025-12-16T03:30:50.615410614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:50.615769 kubelet[3199]: E1216 03:30:50.615712 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:30:50.615769 kubelet[3199]: E1216 03:30:50.615762 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:30:50.616910 kubelet[3199]: E1216 03:30:50.615987 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmrnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pl4wm_calico-system(39c8b25e-ea78-443a-855e-e43746267826): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 03:30:50.620215 containerd[1851]: time="2025-12-16T03:30:50.620129583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:30:50.880145 containerd[1851]: time="2025-12-16T03:30:50.880080309Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:30:50.882536 containerd[1851]: time="2025-12-16T03:30:50.882398436Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:30:50.882536 containerd[1851]: time="2025-12-16T03:30:50.882449550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:50.882915 kubelet[3199]: E1216 03:30:50.882854 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:30:50.882915 kubelet[3199]: E1216 03:30:50.882910 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:30:50.883135 kubelet[3199]: E1216 03:30:50.883057 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmrnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pl4wm_calico-system(39c8b25e-ea78-443a-855e-e43746267826): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:30:50.884775 kubelet[3199]: E1216 03:30:50.884720 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:30:52.380811 containerd[1851]: time="2025-12-16T03:30:52.380564732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:30:52.664946 containerd[1851]: time="2025-12-16T03:30:52.664814849Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:30:52.667339 containerd[1851]: time="2025-12-16T03:30:52.667292103Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:30:52.667567 containerd[1851]: time="2025-12-16T03:30:52.667386965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:52.667967 kubelet[3199]: E1216 03:30:52.667757 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:30:52.668399 kubelet[3199]: E1216 03:30:52.668357 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:30:52.668861 kubelet[3199]: E1216 03:30:52.668501 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:df40755d990c4efb9d838aa3445cd5a6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hzn98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-79b9fd54f5-5npkd_calico-system(a7663fa2-6c23-44c9-b4f7-07ab48404e5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:30:52.671194 containerd[1851]: time="2025-12-16T03:30:52.671148832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:30:52.942918 containerd[1851]: time="2025-12-16T03:30:52.942780317Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:30:52.945154 containerd[1851]: time="2025-12-16T03:30:52.945016613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:30:52.945154 containerd[1851]: time="2025-12-16T03:30:52.945069637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:52.945458 kubelet[3199]: E1216 03:30:52.945355 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:30:52.945458 kubelet[3199]: E1216 03:30:52.945430 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:30:52.945684 kubelet[3199]: E1216 03:30:52.945629 3199 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzn98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b9fd54f5-5npkd_calico-system(a7663fa2-6c23-44c9-b4f7-07ab48404e5b): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:30:52.947312 kubelet[3199]: E1216 03:30:52.947259 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9fd54f5-5npkd" podUID="a7663fa2-6c23-44c9-b4f7-07ab48404e5b" Dec 16 03:30:56.380137 containerd[1851]: time="2025-12-16T03:30:56.380090275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:30:56.666477 containerd[1851]: time="2025-12-16T03:30:56.666336827Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:30:56.668549 containerd[1851]: time="2025-12-16T03:30:56.668495226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:30:56.668751 containerd[1851]: time="2025-12-16T03:30:56.668584147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:56.668800 kubelet[3199]: E1216 03:30:56.668729 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:30:56.668800 kubelet[3199]: E1216 03:30:56.668771 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:30:56.669233 kubelet[3199]: E1216 03:30:56.668892 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k88lr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f46b988d5-dv5td_calico-apiserver(0feff437-5720-498b-a3c1-fbae9f5f245c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:30:56.670096 kubelet[3199]: E1216 03:30:56.670060 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" podUID="0feff437-5720-498b-a3c1-fbae9f5f245c" Dec 16 03:30:58.381447 containerd[1851]: time="2025-12-16T03:30:58.381202192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:30:58.652476 containerd[1851]: time="2025-12-16T03:30:58.652021356Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
03:30:58.656206 containerd[1851]: time="2025-12-16T03:30:58.654628790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:30:58.656604 containerd[1851]: time="2025-12-16T03:30:58.656308735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:30:58.657227 kubelet[3199]: E1216 03:30:58.656796 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:30:58.657227 kubelet[3199]: E1216 03:30:58.656864 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:30:58.657675 kubelet[3199]: E1216 03:30:58.657507 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdx4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f46b988d5-xtbpb_calico-apiserver(e32d83e7-8260-42ce-a13a-b8e2a7a65181): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:30:58.658720 kubelet[3199]: E1216 03:30:58.658679 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" podUID="e32d83e7-8260-42ce-a13a-b8e2a7a65181" Dec 16 03:31:00.391279 kubelet[3199]: E1216 03:31:00.389374 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-95jjv" podUID="1598e409-2173-4fc3-8415-b507d5511623" Dec 16 03:31:00.391993 kubelet[3199]: E1216 03:31:00.390233 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" podUID="69c31889-528a-4f2f-822c-6d89b94291a9" Dec 16 03:31:02.388414 kubelet[3199]: E1216 03:31:02.388274 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:31:05.386298 kubelet[3199]: E1216 03:31:05.385227 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9fd54f5-5npkd" podUID="a7663fa2-6c23-44c9-b4f7-07ab48404e5b" Dec 16 03:31:09.309816 systemd[1]: Started sshd@7-172.31.30.117:22-147.75.109.163:34496.service - OpenSSH per-connection server daemon (147.75.109.163:34496). 
Dec 16 03:31:09.312626 kernel: kauditd_printk_skb: 39 callbacks suppressed Dec 16 03:31:09.312696 kernel: audit: type=1130 audit(1765855869.308:767): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.30.117:22-147.75.109.163:34496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:09.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.30.117:22-147.75.109.163:34496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:09.396208 kubelet[3199]: E1216 03:31:09.395372 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" podUID="0feff437-5720-498b-a3c1-fbae9f5f245c" Dec 16 03:31:09.580000 audit[5597]: USER_ACCT pid=5597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:09.583407 sshd[5597]: Accepted publickey for core from 147.75.109.163 port 34496 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc Dec 16 03:31:09.587286 kernel: audit: type=1101 audit(1765855869.580:768): pid=5597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 
addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:09.589625 sshd-session[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:31:09.584000 audit[5597]: CRED_ACQ pid=5597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:09.596481 kernel: audit: type=1103 audit(1765855869.584:769): pid=5597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:09.603232 kernel: audit: type=1006 audit(1765855869.584:770): pid=5597 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 16 03:31:09.584000 audit[5597]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd58d6d930 a2=3 a3=0 items=0 ppid=1 pid=5597 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:09.607998 systemd-logind[1837]: New session 9 of user core. 
Dec 16 03:31:09.611845 kernel: audit: type=1300 audit(1765855869.584:770): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd58d6d930 a2=3 a3=0 items=0 ppid=1 pid=5597 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:09.625209 kernel: audit: type=1327 audit(1765855869.584:770): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:31:09.584000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:31:09.624511 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 03:31:09.635000 audit[5597]: USER_START pid=5597 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:09.644693 kernel: audit: type=1105 audit(1765855869.635:771): pid=5597 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:09.648000 audit[5601]: CRED_ACQ pid=5601 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:09.655209 kernel: audit: type=1103 audit(1765855869.648:772): pid=5601 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:10.620632 sshd[5601]: Connection closed by 147.75.109.163 port 34496 Dec 16 03:31:10.622475 sshd-session[5597]: pam_unix(sshd:session): session closed for user core Dec 16 03:31:10.630000 audit[5597]: USER_END pid=5597 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:10.635822 systemd[1]: sshd@7-172.31.30.117:22-147.75.109.163:34496.service: Deactivated successfully. Dec 16 03:31:10.640633 kernel: audit: type=1106 audit(1765855870.630:773): pid=5597 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:10.639804 systemd[1]: session-9.scope: Deactivated successfully. 
Dec 16 03:31:10.630000 audit[5597]: CRED_DISP pid=5597 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:10.648859 kernel: audit: type=1104 audit(1765855870.630:774): pid=5597 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:10.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.30.117:22-147.75.109.163:34496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:10.650641 systemd-logind[1837]: Session 9 logged out. Waiting for processes to exit. Dec 16 03:31:10.654107 systemd-logind[1837]: Removed session 9. 
Dec 16 03:31:13.385361 containerd[1851]: time="2025-12-16T03:31:13.385312653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:31:13.385846 kubelet[3199]: E1216 03:31:13.385689 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" podUID="e32d83e7-8260-42ce-a13a-b8e2a7a65181" Dec 16 03:31:13.692303 containerd[1851]: time="2025-12-16T03:31:13.692144059Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:31:13.698219 containerd[1851]: time="2025-12-16T03:31:13.698065252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:31:13.698642 containerd[1851]: time="2025-12-16T03:31:13.698126947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:31:13.699106 kubelet[3199]: E1216 03:31:13.698985 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:31:13.699319 kubelet[3199]: E1216 03:31:13.699075 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:31:13.700511 kubelet[3199]: E1216 03:31:13.700279 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmrnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePo
licy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pl4wm_calico-system(39c8b25e-ea78-443a-855e-e43746267826): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:31:13.701097 containerd[1851]: time="2025-12-16T03:31:13.700380100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:31:13.978133 containerd[1851]: time="2025-12-16T03:31:13.977656162Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:31:13.980356 containerd[1851]: time="2025-12-16T03:31:13.980290048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:31:13.980509 containerd[1851]: time="2025-12-16T03:31:13.980366848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:31:13.980837 kubelet[3199]: E1216 03:31:13.980736 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:31:13.980928 kubelet[3199]: E1216 03:31:13.980871 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:31:13.981707 containerd[1851]: time="2025-12-16T03:31:13.981425210Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:31:13.981985 kubelet[3199]: E1216 03:31:13.981915 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvbg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-95jjv_calico-system(1598e409-2173-4fc3-8415-b507d5511623): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:31:13.983488 kubelet[3199]: E1216 03:31:13.983409 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-95jjv" podUID="1598e409-2173-4fc3-8415-b507d5511623" Dec 16 03:31:14.245643 containerd[1851]: time="2025-12-16T03:31:14.245431705Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:31:14.247848 containerd[1851]: time="2025-12-16T03:31:14.247643332Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:31:14.248026 containerd[1851]: time="2025-12-16T03:31:14.247870894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:31:14.248371 kubelet[3199]: E1216 03:31:14.248306 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:31:14.248466 kubelet[3199]: E1216 03:31:14.248393 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:31:14.249008 kubelet[3199]: E1216 03:31:14.248876 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmrnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pl4wm_calico-system(39c8b25e-ea78-443a-855e-e43746267826): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:31:14.250283 kubelet[3199]: E1216 03:31:14.250250 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:31:14.388628 containerd[1851]: time="2025-12-16T03:31:14.388524628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:31:14.653113 containerd[1851]: time="2025-12-16T03:31:14.652975274Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:31:14.655475 containerd[1851]: time="2025-12-16T03:31:14.655348257Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:31:14.655990 containerd[1851]: time="2025-12-16T03:31:14.655401899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:31:14.656257 kubelet[3199]: E1216 03:31:14.656157 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:31:14.657923 kubelet[3199]: E1216 03:31:14.656271 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:31:14.657923 kubelet[3199]: E1216 03:31:14.656468 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68pqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6b698bdfc8-x42pm_calico-system(69c31889-528a-4f2f-822c-6d89b94291a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:31:14.657923 kubelet[3199]: E1216 03:31:14.657650 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" podUID="69c31889-528a-4f2f-822c-6d89b94291a9" Dec 16 03:31:15.662104 systemd[1]: 
Started sshd@8-172.31.30.117:22-147.75.109.163:37856.service - OpenSSH per-connection server daemon (147.75.109.163:37856). Dec 16 03:31:15.669577 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:31:15.669699 kernel: audit: type=1130 audit(1765855875.661:776): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.30.117:22-147.75.109.163:37856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:15.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.30.117:22-147.75.109.163:37856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:15.855000 audit[5627]: USER_ACCT pid=5627 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:15.862966 sshd[5627]: Accepted publickey for core from 147.75.109.163 port 37856 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc Dec 16 03:31:15.863347 kernel: audit: type=1101 audit(1765855875.855:777): pid=5627 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:15.861000 audit[5627]: CRED_ACQ pid=5627 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:15.864374 sshd-session[5627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 
16 03:31:15.872987 kernel: audit: type=1103 audit(1765855875.861:778): pid=5627 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:15.873100 kernel: audit: type=1006 audit(1765855875.861:779): pid=5627 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 16 03:31:15.882517 kernel: audit: type=1300 audit(1765855875.861:779): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3f578490 a2=3 a3=0 items=0 ppid=1 pid=5627 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:15.861000 audit[5627]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3f578490 a2=3 a3=0 items=0 ppid=1 pid=5627 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:15.878673 systemd-logind[1837]: New session 10 of user core. Dec 16 03:31:15.861000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:31:15.886212 kernel: audit: type=1327 audit(1765855875.861:779): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:31:15.889426 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 16 03:31:15.903404 kernel: audit: type=1105 audit(1765855875.894:780): pid=5627 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:15.894000 audit[5627]: USER_START pid=5627 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:15.903000 audit[5631]: CRED_ACQ pid=5631 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:15.913239 kernel: audit: type=1103 audit(1765855875.903:781): pid=5631 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:16.364223 sshd[5631]: Connection closed by 147.75.109.163 port 37856 Dec 16 03:31:16.365990 sshd-session[5627]: pam_unix(sshd:session): session closed for user core Dec 16 03:31:16.368000 audit[5627]: USER_END pid=5627 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:16.377325 kernel: audit: type=1106 
audit(1765855876.368:782): pid=5627 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:16.374411 systemd[1]: sshd@8-172.31.30.117:22-147.75.109.163:37856.service: Deactivated successfully. Dec 16 03:31:16.377596 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 03:31:16.388372 kernel: audit: type=1104 audit(1765855876.369:783): pid=5627 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:16.369000 audit[5627]: CRED_DISP pid=5627 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:16.383708 systemd-logind[1837]: Session 10 logged out. Waiting for processes to exit. Dec 16 03:31:16.387438 systemd-logind[1837]: Removed session 10. Dec 16 03:31:16.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.30.117:22-147.75.109.163:37856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:31:17.384637 containerd[1851]: time="2025-12-16T03:31:17.384580277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:31:17.746377 containerd[1851]: time="2025-12-16T03:31:17.746132657Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:31:17.750591 containerd[1851]: time="2025-12-16T03:31:17.750528223Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:31:17.750758 containerd[1851]: time="2025-12-16T03:31:17.750655620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:31:17.750924 kubelet[3199]: E1216 03:31:17.750875 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:31:17.751569 kubelet[3199]: E1216 03:31:17.750934 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:31:17.751569 kubelet[3199]: E1216 03:31:17.751077 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:df40755d990c4efb9d838aa3445cd5a6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hzn98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b9fd54f5-5npkd_calico-system(a7663fa2-6c23-44c9-b4f7-07ab48404e5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:31:17.753574 containerd[1851]: time="2025-12-16T03:31:17.753529531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:31:18.041106 containerd[1851]: 
time="2025-12-16T03:31:18.040235123Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:31:18.042661 containerd[1851]: time="2025-12-16T03:31:18.042509170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:31:18.042661 containerd[1851]: time="2025-12-16T03:31:18.042585246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:31:18.043492 kubelet[3199]: E1216 03:31:18.043436 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:31:18.043731 kubelet[3199]: E1216 03:31:18.043606 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:31:18.044370 kubelet[3199]: E1216 03:31:18.044282 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzn98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b9fd54f5-5npkd_calico-system(a7663fa2-6c23-44c9-b4f7-07ab48404e5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:31:18.045686 kubelet[3199]: E1216 03:31:18.045625 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9fd54f5-5npkd" podUID="a7663fa2-6c23-44c9-b4f7-07ab48404e5b" Dec 16 03:31:21.403570 systemd[1]: Started sshd@9-172.31.30.117:22-147.75.109.163:37868.service - OpenSSH per-connection server daemon (147.75.109.163:37868). Dec 16 03:31:21.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.30.117:22-147.75.109.163:37868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:21.407852 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:31:21.407952 kernel: audit: type=1130 audit(1765855881.402:785): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.30.117:22-147.75.109.163:37868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:31:21.624000 audit[5644]: USER_ACCT pid=5644 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:21.629475 sshd[5644]: Accepted publickey for core from 147.75.109.163 port 37868 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc Dec 16 03:31:21.635518 kernel: audit: type=1101 audit(1765855881.624:786): pid=5644 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:21.635653 kernel: audit: type=1103 audit(1765855881.630:787): pid=5644 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:21.630000 audit[5644]: CRED_ACQ pid=5644 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:21.633660 sshd-session[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:31:21.631000 audit[5644]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbd923a60 a2=3 a3=0 items=0 ppid=1 pid=5644 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:21.644723 kernel: audit: type=1006 audit(1765855881.631:788): pid=5644 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 03:31:21.644931 kernel: audit: type=1300 audit(1765855881.631:788): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbd923a60 a2=3 a3=0 items=0 ppid=1 pid=5644 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:21.631000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:31:21.653362 kernel: audit: type=1327 audit(1765855881.631:788): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:31:21.658889 systemd-logind[1837]: New session 11 of user core. Dec 16 03:31:21.663458 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 03:31:21.669000 audit[5644]: USER_START pid=5644 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:21.679778 kernel: audit: type=1105 audit(1765855881.669:789): pid=5644 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:21.679874 kernel: audit: type=1103 audit(1765855881.677:790): pid=5648 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:21.677000 audit[5648]: 
CRED_ACQ pid=5648 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:21.859290 sshd[5648]: Connection closed by 147.75.109.163 port 37868 Dec 16 03:31:21.860038 sshd-session[5644]: pam_unix(sshd:session): session closed for user core Dec 16 03:31:21.883000 audit[5644]: USER_END pid=5644 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:21.886000 audit[5644]: CRED_DISP pid=5644 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:21.892801 kernel: audit: type=1106 audit(1765855881.883:791): pid=5644 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:21.892905 kernel: audit: type=1104 audit(1765855881.886:792): pid=5644 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:21.903126 systemd[1]: sshd@9-172.31.30.117:22-147.75.109.163:37868.service: Deactivated successfully. 
Dec 16 03:31:21.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.30.117:22-147.75.109.163:37868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:21.908359 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 03:31:21.911406 systemd-logind[1837]: Session 11 logged out. Waiting for processes to exit. Dec 16 03:31:21.917603 systemd[1]: Started sshd@10-172.31.30.117:22-147.75.109.163:37872.service - OpenSSH per-connection server daemon (147.75.109.163:37872). Dec 16 03:31:21.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.30.117:22-147.75.109.163:37872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:21.919854 systemd-logind[1837]: Removed session 11. Dec 16 03:31:22.130000 audit[5661]: USER_ACCT pid=5661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:22.132706 sshd[5661]: Accepted publickey for core from 147.75.109.163 port 37872 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc Dec 16 03:31:22.133000 audit[5661]: CRED_ACQ pid=5661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:22.133000 audit[5661]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd395bd090 a2=3 a3=0 items=0 ppid=1 pid=5661 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:22.133000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:31:22.136388 sshd-session[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:31:22.146423 systemd-logind[1837]: New session 12 of user core. Dec 16 03:31:22.152490 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 03:31:22.156000 audit[5661]: USER_START pid=5661 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:22.158000 audit[5665]: CRED_ACQ pid=5665 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:22.359984 sshd[5665]: Connection closed by 147.75.109.163 port 37872 Dec 16 03:31:22.362384 sshd-session[5661]: pam_unix(sshd:session): session closed for user core Dec 16 03:31:22.366000 audit[5661]: USER_END pid=5661 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:22.366000 audit[5661]: CRED_DISP pid=5661 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:22.372593 systemd-logind[1837]: Session 12 logged out. 
Waiting for processes to exit. Dec 16 03:31:22.373483 systemd[1]: sshd@10-172.31.30.117:22-147.75.109.163:37872.service: Deactivated successfully. Dec 16 03:31:22.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.30.117:22-147.75.109.163:37872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:22.380279 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 03:31:22.389837 containerd[1851]: time="2025-12-16T03:31:22.388515789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:31:22.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.30.117:22-147.75.109.163:48804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:22.416296 systemd[1]: Started sshd@11-172.31.30.117:22-147.75.109.163:48804.service - OpenSSH per-connection server daemon (147.75.109.163:48804). Dec 16 03:31:22.420593 systemd-logind[1837]: Removed session 12. 
Dec 16 03:31:22.629000 audit[5675]: USER_ACCT pid=5675 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:22.630850 sshd[5675]: Accepted publickey for core from 147.75.109.163 port 48804 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc Dec 16 03:31:22.632000 audit[5675]: CRED_ACQ pid=5675 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:22.632000 audit[5675]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9c6b2870 a2=3 a3=0 items=0 ppid=1 pid=5675 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:22.632000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:31:22.634679 sshd-session[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:31:22.646234 systemd-logind[1837]: New session 13 of user core. Dec 16 03:31:22.653852 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 03:31:22.659000 audit[5675]: USER_START pid=5675 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:22.660981 containerd[1851]: time="2025-12-16T03:31:22.660710479Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:31:22.663329 containerd[1851]: time="2025-12-16T03:31:22.663234575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:31:22.662000 audit[5679]: CRED_ACQ pid=5679 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:22.664712 containerd[1851]: time="2025-12-16T03:31:22.663283751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:31:22.664784 kubelet[3199]: E1216 03:31:22.664287 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:31:22.664784 kubelet[3199]: E1216 03:31:22.664337 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:31:22.664784 kubelet[3199]: E1216 03:31:22.664494 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k88lr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f46b988d5-dv5td_calico-apiserver(0feff437-5720-498b-a3c1-fbae9f5f245c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:31:22.666199 kubelet[3199]: E1216 03:31:22.665716 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" podUID="0feff437-5720-498b-a3c1-fbae9f5f245c" Dec 16 03:31:22.836289 sshd[5679]: Connection closed by 147.75.109.163 port 48804 Dec 16 03:31:22.837006 sshd-session[5675]: pam_unix(sshd:session): session closed for user core Dec 16 03:31:22.840000 audit[5675]: USER_END pid=5675 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:22.840000 audit[5675]: CRED_DISP pid=5675 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:22.845057 systemd[1]: sshd@11-172.31.30.117:22-147.75.109.163:48804.service: Deactivated successfully. Dec 16 03:31:22.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.30.117:22-147.75.109.163:48804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:22.850476 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 03:31:22.852237 systemd-logind[1837]: Session 13 logged out. Waiting for processes to exit. Dec 16 03:31:22.856564 systemd-logind[1837]: Removed session 13. 
Dec 16 03:31:25.381668 kubelet[3199]: E1216 03:31:25.381473 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:31:26.382201 kubelet[3199]: E1216 03:31:26.381445 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" podUID="69c31889-528a-4f2f-822c-6d89b94291a9" Dec 16 03:31:27.384510 containerd[1851]: time="2025-12-16T03:31:27.384421704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:31:27.384953 kubelet[3199]: E1216 03:31:27.384836 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-95jjv" podUID="1598e409-2173-4fc3-8415-b507d5511623" Dec 16 03:31:27.677473 containerd[1851]: time="2025-12-16T03:31:27.677311755Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:31:27.680135 containerd[1851]: time="2025-12-16T03:31:27.679932432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:31:27.680840 kubelet[3199]: E1216 03:31:27.680778 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:31:27.685857 kubelet[3199]: E1216 03:31:27.681008 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:31:27.685857 kubelet[3199]: E1216 03:31:27.685361 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdx4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f46b988d5-xtbpb_calico-apiserver(e32d83e7-8260-42ce-a13a-b8e2a7a65181): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:31:27.690216 kubelet[3199]: E1216 03:31:27.687298 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" podUID="e32d83e7-8260-42ce-a13a-b8e2a7a65181" Dec 16 03:31:27.730743 containerd[1851]: time="2025-12-16T03:31:27.730573518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:31:27.882205 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 03:31:27.882339 kernel: audit: type=1130 audit(1765855887.874:812): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.30.117:22-147.75.109.163:48814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:27.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.30.117:22-147.75.109.163:48814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:27.875580 systemd[1]: Started sshd@12-172.31.30.117:22-147.75.109.163:48814.service - OpenSSH per-connection server daemon (147.75.109.163:48814). 
Dec 16 03:31:28.067000 audit[5724]: USER_ACCT pid=5724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:28.071903 sshd-session[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:31:28.074869 sshd[5724]: Accepted publickey for core from 147.75.109.163 port 48814 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc Dec 16 03:31:28.075757 kernel: audit: type=1101 audit(1765855888.067:813): pid=5724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:28.069000 audit[5724]: CRED_ACQ pid=5724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:28.092894 kernel: audit: type=1103 audit(1765855888.069:814): pid=5724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:28.092383 systemd-logind[1837]: New session 14 of user core. Dec 16 03:31:28.098253 kernel: audit: type=1006 audit(1765855888.069:815): pid=5724 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 03:31:28.097253 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 03:31:28.069000 audit[5724]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda37dbf90 a2=3 a3=0 items=0 ppid=1 pid=5724 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:28.109340 kernel: audit: type=1300 audit(1765855888.069:815): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda37dbf90 a2=3 a3=0 items=0 ppid=1 pid=5724 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:28.069000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:31:28.115364 kernel: audit: type=1327 audit(1765855888.069:815): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:31:28.110000 audit[5724]: USER_START pid=5724 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:28.114000 audit[5728]: CRED_ACQ pid=5728 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:28.124061 kernel: audit: type=1105 audit(1765855888.110:816): pid=5724 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 
03:31:28.124199 kernel: audit: type=1103 audit(1765855888.114:817): pid=5728 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:28.254922 sshd[5728]: Connection closed by 147.75.109.163 port 48814 Dec 16 03:31:28.257216 sshd-session[5724]: pam_unix(sshd:session): session closed for user core Dec 16 03:31:28.266893 kernel: audit: type=1106 audit(1765855888.258:818): pid=5724 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:28.258000 audit[5724]: USER_END pid=5724 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:28.262242 systemd-logind[1837]: Session 14 logged out. Waiting for processes to exit. Dec 16 03:31:28.265088 systemd[1]: sshd@12-172.31.30.117:22-147.75.109.163:48814.service: Deactivated successfully. Dec 16 03:31:28.268315 systemd[1]: session-14.scope: Deactivated successfully. 
Dec 16 03:31:28.258000 audit[5724]: CRED_DISP pid=5724 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:28.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.30.117:22-147.75.109.163:48814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:28.272128 systemd-logind[1837]: Removed session 14. Dec 16 03:31:28.278441 kernel: audit: type=1104 audit(1765855888.258:819): pid=5724 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:30.379955 kubelet[3199]: E1216 03:31:30.379901 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9fd54f5-5npkd" podUID="a7663fa2-6c23-44c9-b4f7-07ab48404e5b" Dec 16 03:31:33.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@13-172.31.30.117:22-147.75.109.163:50324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:33.295700 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:31:33.295943 kernel: audit: type=1130 audit(1765855893.293:821): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.30.117:22-147.75.109.163:50324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:33.294753 systemd[1]: Started sshd@13-172.31.30.117:22-147.75.109.163:50324.service - OpenSSH per-connection server daemon (147.75.109.163:50324). Dec 16 03:31:33.457524 sshd[5741]: Accepted publickey for core from 147.75.109.163 port 50324 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc Dec 16 03:31:33.465204 kernel: audit: type=1101 audit(1765855893.456:822): pid=5741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:33.456000 audit[5741]: USER_ACCT pid=5741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:33.465895 sshd-session[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:31:33.462000 audit[5741]: CRED_ACQ pid=5741 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:33.477058 kernel: audit: type=1103 audit(1765855893.462:823): pid=5741 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:33.477613 kernel: audit: type=1006 audit(1765855893.463:824): pid=5741 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 03:31:33.488310 kernel: audit: type=1300 audit(1765855893.463:824): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff20b0afb0 a2=3 a3=0 items=0 ppid=1 pid=5741 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:33.463000 audit[5741]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff20b0afb0 a2=3 a3=0 items=0 ppid=1 pid=5741 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:33.493287 kernel: audit: type=1327 audit(1765855893.463:824): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:31:33.463000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:31:33.497815 systemd-logind[1837]: New session 15 of user core. Dec 16 03:31:33.502435 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 03:31:33.509000 audit[5741]: USER_START pid=5741 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:33.518546 kernel: audit: type=1105 audit(1765855893.509:825): pid=5741 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:33.519000 audit[5745]: CRED_ACQ pid=5745 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:33.527196 kernel: audit: type=1103 audit(1765855893.519:826): pid=5745 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:33.812112 sshd[5745]: Connection closed by 147.75.109.163 port 50324 Dec 16 03:31:33.818376 sshd-session[5741]: pam_unix(sshd:session): session closed for user core Dec 16 03:31:33.825000 audit[5741]: USER_END pid=5741 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:33.831279 systemd[1]: 
sshd@13-172.31.30.117:22-147.75.109.163:50324.service: Deactivated successfully. Dec 16 03:31:33.834324 kernel: audit: type=1106 audit(1765855893.825:827): pid=5741 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:33.826000 audit[5741]: CRED_DISP pid=5741 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:33.838570 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 03:31:33.843525 kernel: audit: type=1104 audit(1765855893.826:828): pid=5741 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:33.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.30.117:22-147.75.109.163:50324 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:33.845447 systemd-logind[1837]: Session 15 logged out. Waiting for processes to exit. Dec 16 03:31:33.847932 systemd-logind[1837]: Removed session 15. 
Dec 16 03:31:36.380621 kubelet[3199]: E1216 03:31:36.380568 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" podUID="0feff437-5720-498b-a3c1-fbae9f5f245c"
Dec 16 03:31:37.386989 kubelet[3199]: E1216 03:31:37.386940 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826"
Dec 16 03:31:38.382194 kubelet[3199]: E1216 03:31:38.381770 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" podUID="69c31889-528a-4f2f-822c-6d89b94291a9"
Dec 16 03:31:38.382799 kubelet[3199]: E1216 03:31:38.382763 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" podUID="e32d83e7-8260-42ce-a13a-b8e2a7a65181"
Dec 16 03:31:38.850571 systemd[1]: Started sshd@14-172.31.30.117:22-147.75.109.163:50338.service - OpenSSH per-connection server daemon (147.75.109.163:50338).
Dec 16 03:31:38.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.30.117:22-147.75.109.163:50338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:31:38.851741 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 03:31:38.851823 kernel: audit: type=1130 audit(1765855898.849:830): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.30.117:22-147.75.109.163:50338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:31:39.090000 audit[5759]: USER_ACCT pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:39.097800 sshd[5759]: Accepted publickey for core from 147.75.109.163 port 50338 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc
Dec 16 03:31:39.098299 kernel: audit: type=1101 audit(1765855899.090:831): pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:39.098000 audit[5759]: CRED_ACQ pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:39.107769 kernel: audit: type=1103 audit(1765855899.098:832): pid=5759 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:39.107896 kernel: audit: type=1006 audit(1765855899.098:833): pid=5759 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1
Dec 16 03:31:39.110725 sshd-session[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:31:39.098000 audit[5759]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff444ce8f0 a2=3 a3=0 items=0 ppid=1 pid=5759 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:31:39.118197 kernel: audit: type=1300 audit(1765855899.098:833): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff444ce8f0 a2=3 a3=0 items=0 ppid=1 pid=5759 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:31:39.098000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:31:39.122192 kernel: audit: type=1327 audit(1765855899.098:833): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:31:39.131507 systemd-logind[1837]: New session 16 of user core.
Dec 16 03:31:39.136474 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 03:31:39.142000 audit[5759]: USER_START pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:39.151194 kernel: audit: type=1105 audit(1765855899.142:834): pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:39.152000 audit[5763]: CRED_ACQ pid=5763 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:39.159200 kernel: audit: type=1103 audit(1765855899.152:835): pid=5763 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:39.427821 sshd[5763]: Connection closed by 147.75.109.163 port 50338
Dec 16 03:31:39.429408 sshd-session[5759]: pam_unix(sshd:session): session closed for user core
Dec 16 03:31:39.430000 audit[5759]: USER_END pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:39.441201 kernel: audit: type=1106 audit(1765855899.430:836): pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:39.444567 systemd[1]: sshd@14-172.31.30.117:22-147.75.109.163:50338.service: Deactivated successfully.
Dec 16 03:31:39.447947 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 03:31:39.451790 systemd-logind[1837]: Session 16 logged out. Waiting for processes to exit.
Dec 16 03:31:39.431000 audit[5759]: CRED_DISP pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:39.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.30.117:22-147.75.109.163:50338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:31:39.459333 systemd-logind[1837]: Removed session 16.
Dec 16 03:31:39.460264 kernel: audit: type=1104 audit(1765855899.431:837): pid=5759 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:40.380268 kubelet[3199]: E1216 03:31:40.379790 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-95jjv" podUID="1598e409-2173-4fc3-8415-b507d5511623"
Dec 16 03:31:44.384631 kubelet[3199]: E1216 03:31:44.384574 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9fd54f5-5npkd" podUID="a7663fa2-6c23-44c9-b4f7-07ab48404e5b"
Dec 16 03:31:44.464661 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 03:31:44.464800 kernel: audit: type=1130 audit(1765855904.462:839): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.30.117:22-147.75.109.163:36796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:31:44.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.30.117:22-147.75.109.163:36796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:31:44.463705 systemd[1]: Started sshd@15-172.31.30.117:22-147.75.109.163:36796.service - OpenSSH per-connection server daemon (147.75.109.163:36796).
Dec 16 03:31:44.639000 audit[5778]: USER_ACCT pid=5778 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:44.644703 sshd-session[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:31:44.646740 sshd[5778]: Accepted publickey for core from 147.75.109.163 port 36796 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc
Dec 16 03:31:44.648374 kernel: audit: type=1101 audit(1765855904.639:840): pid=5778 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:44.639000 audit[5778]: CRED_ACQ pid=5778 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:44.656234 kernel: audit: type=1103 audit(1765855904.639:841): pid=5778 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:44.656338 kernel: audit: type=1006 audit(1765855904.639:842): pid=5778 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1
Dec 16 03:31:44.639000 audit[5778]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe67bb2b40 a2=3 a3=0 items=0 ppid=1 pid=5778 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:31:44.661266 kernel: audit: type=1300 audit(1765855904.639:842): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe67bb2b40 a2=3 a3=0 items=0 ppid=1 pid=5778 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:31:44.639000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:31:44.665660 systemd-logind[1837]: New session 17 of user core.
Dec 16 03:31:44.666855 kernel: audit: type=1327 audit(1765855904.639:842): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:31:44.668557 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 03:31:44.672000 audit[5778]: USER_START pid=5778 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:44.675000 audit[5782]: CRED_ACQ pid=5782 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:44.682061 kernel: audit: type=1105 audit(1765855904.672:843): pid=5778 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:44.682239 kernel: audit: type=1103 audit(1765855904.675:844): pid=5782 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:44.859289 sshd[5782]: Connection closed by 147.75.109.163 port 36796
Dec 16 03:31:44.860454 sshd-session[5778]: pam_unix(sshd:session): session closed for user core
Dec 16 03:31:44.863000 audit[5778]: USER_END pid=5778 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:44.873284 kernel: audit: type=1106 audit(1765855904.863:845): pid=5778 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:44.863000 audit[5778]: CRED_DISP pid=5778 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:44.878722 systemd[1]: sshd@15-172.31.30.117:22-147.75.109.163:36796.service: Deactivated successfully.
Dec 16 03:31:44.881192 kernel: audit: type=1104 audit(1765855904.863:846): pid=5778 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:44.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.30.117:22-147.75.109.163:36796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:31:44.883882 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 03:31:44.886019 systemd-logind[1837]: Session 17 logged out. Waiting for processes to exit.
Dec 16 03:31:44.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.30.117:22-147.75.109.163:36804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:31:44.907088 systemd[1]: Started sshd@16-172.31.30.117:22-147.75.109.163:36804.service - OpenSSH per-connection server daemon (147.75.109.163:36804).
Dec 16 03:31:44.909768 systemd-logind[1837]: Removed session 17.
Dec 16 03:31:45.087000 audit[5795]: USER_ACCT pid=5795 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:45.090000 audit[5795]: CRED_ACQ pid=5795 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:45.092376 sshd[5795]: Accepted publickey for core from 147.75.109.163 port 36804 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc
Dec 16 03:31:45.091000 audit[5795]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff49425980 a2=3 a3=0 items=0 ppid=1 pid=5795 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:31:45.091000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:31:45.095553 sshd-session[5795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:31:45.107484 systemd-logind[1837]: New session 18 of user core.
Dec 16 03:31:45.115485 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 03:31:45.119000 audit[5795]: USER_START pid=5795 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:45.123000 audit[5799]: CRED_ACQ pid=5799 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:46.320319 sshd[5799]: Connection closed by 147.75.109.163 port 36804
Dec 16 03:31:46.320913 sshd-session[5795]: pam_unix(sshd:session): session closed for user core
Dec 16 03:31:46.321000 audit[5795]: USER_END pid=5795 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:46.322000 audit[5795]: CRED_DISP pid=5795 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:46.330669 systemd[1]: sshd@16-172.31.30.117:22-147.75.109.163:36804.service: Deactivated successfully.
Dec 16 03:31:46.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.30.117:22-147.75.109.163:36804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:31:46.333952 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 03:31:46.335137 systemd-logind[1837]: Session 18 logged out. Waiting for processes to exit.
Dec 16 03:31:46.338540 systemd-logind[1837]: Removed session 18.
Dec 16 03:31:46.353339 systemd[1]: Started sshd@17-172.31.30.117:22-147.75.109.163:36816.service - OpenSSH per-connection server daemon (147.75.109.163:36816).
Dec 16 03:31:46.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.30.117:22-147.75.109.163:36816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:31:46.547000 audit[5809]: USER_ACCT pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:46.548958 sshd[5809]: Accepted publickey for core from 147.75.109.163 port 36816 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc
Dec 16 03:31:46.549000 audit[5809]: CRED_ACQ pid=5809 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:46.549000 audit[5809]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea32507e0 a2=3 a3=0 items=0 ppid=1 pid=5809 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:31:46.549000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:31:46.552794 sshd-session[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:31:46.564405 systemd-logind[1837]: New session 19 of user core.
Dec 16 03:31:46.573727 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 03:31:46.579000 audit[5809]: USER_START pid=5809 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:46.582000 audit[5813]: CRED_ACQ pid=5813 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:47.382212 kubelet[3199]: E1216 03:31:47.381424 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" podUID="0feff437-5720-498b-a3c1-fbae9f5f245c"
Dec 16 03:31:47.638273 sshd[5813]: Connection closed by 147.75.109.163 port 36816
Dec 16 03:31:47.639575 sshd-session[5809]: pam_unix(sshd:session): session closed for user core
Dec 16 03:31:47.642000 audit[5809]: USER_END pid=5809 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:47.642000 audit[5809]: CRED_DISP pid=5809 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:47.647985 systemd[1]: sshd@17-172.31.30.117:22-147.75.109.163:36816.service: Deactivated successfully.
Dec 16 03:31:47.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.30.117:22-147.75.109.163:36816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:31:47.653189 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 03:31:47.655802 systemd-logind[1837]: Session 19 logged out. Waiting for processes to exit.
Dec 16 03:31:47.679855 systemd-logind[1837]: Removed session 19.
Dec 16 03:31:47.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.30.117:22-147.75.109.163:36826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:31:47.681596 systemd[1]: Started sshd@18-172.31.30.117:22-147.75.109.163:36826.service - OpenSSH per-connection server daemon (147.75.109.163:36826).
Dec 16 03:31:47.742000 audit[5829]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5829 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 03:31:47.742000 audit[5829]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff8a5b38e0 a2=0 a3=7fff8a5b38cc items=0 ppid=3301 pid=5829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:31:47.742000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 03:31:47.746000 audit[5829]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5829 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 03:31:47.746000 audit[5829]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff8a5b38e0 a2=0 a3=0 items=0 ppid=3301 pid=5829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:31:47.746000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 03:31:47.765000 audit[5832]: NETFILTER_CFG table=filter:150 family=2 entries=38 op=nft_register_rule pid=5832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 03:31:47.765000 audit[5832]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffef28b8ec0 a2=0 a3=7ffef28b8eac items=0 ppid=3301 pid=5832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:31:47.765000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 03:31:47.771000 audit[5832]: NETFILTER_CFG table=nat:151 family=2 entries=20 op=nft_register_rule pid=5832 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Dec 16 03:31:47.771000 audit[5832]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffef28b8ec0 a2=0 a3=0 items=0 ppid=3301 pid=5832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:31:47.771000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Dec 16 03:31:47.882000 audit[5827]: USER_ACCT pid=5827 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:47.884872 sshd[5827]: Accepted publickey for core from 147.75.109.163 port 36826 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc
Dec 16 03:31:47.884000 audit[5827]: CRED_ACQ pid=5827 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:47.884000 audit[5827]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf9ed45a0 a2=3 a3=0 items=0 ppid=1 pid=5827 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:31:47.884000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:31:47.887078 sshd-session[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:31:47.897728 systemd-logind[1837]: New session 20 of user core.
Dec 16 03:31:47.904510 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 03:31:47.910000 audit[5827]: USER_START pid=5827 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:47.913000 audit[5834]: CRED_ACQ pid=5834 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:48.413319 sshd[5834]: Connection closed by 147.75.109.163 port 36826
Dec 16 03:31:48.416727 sshd-session[5827]: pam_unix(sshd:session): session closed for user core
Dec 16 03:31:48.419000 audit[5827]: USER_END pid=5827 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:48.419000 audit[5827]: CRED_DISP pid=5827 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:48.424473 systemd[1]: sshd@18-172.31.30.117:22-147.75.109.163:36826.service: Deactivated successfully.
Dec 16 03:31:48.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.30.117:22-147.75.109.163:36826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:31:48.429941 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 03:31:48.436489 systemd-logind[1837]: Session 20 logged out. Waiting for processes to exit.
Dec 16 03:31:48.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.30.117:22-147.75.109.163:36830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:31:48.450879 systemd[1]: Started sshd@19-172.31.30.117:22-147.75.109.163:36830.service - OpenSSH per-connection server daemon (147.75.109.163:36830).
Dec 16 03:31:48.455547 systemd-logind[1837]: Removed session 20.
Dec 16 03:31:48.635000 audit[5844]: USER_ACCT pid=5844 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:48.637133 sshd[5844]: Accepted publickey for core from 147.75.109.163 port 36830 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc
Dec 16 03:31:48.636000 audit[5844]: CRED_ACQ pid=5844 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:48.636000 audit[5844]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed2deee80 a2=3 a3=0 items=0 ppid=1 pid=5844 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:31:48.636000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:31:48.639132 sshd-session[5844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:31:48.650341 systemd-logind[1837]: New session 21 of user core.
Dec 16 03:31:48.657460 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 03:31:48.662000 audit[5844]: USER_START pid=5844 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:48.666000 audit[5848]: CRED_ACQ pid=5848 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:48.928444 sshd[5848]: Connection closed by 147.75.109.163 port 36830
Dec 16 03:31:48.926712 sshd-session[5844]: pam_unix(sshd:session): session closed for user core
Dec 16 03:31:48.929000 audit[5844]: USER_END pid=5844 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:48.929000 audit[5844]: CRED_DISP pid=5844 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Dec 16 03:31:48.934097 systemd-logind[1837]: Session 21 logged out. Waiting for processes to exit.
Dec 16 03:31:48.937253 systemd[1]: sshd@19-172.31.30.117:22-147.75.109.163:36830.service: Deactivated successfully.
Dec 16 03:31:48.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.30.117:22-147.75.109.163:36830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:31:48.940362 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 03:31:48.946271 systemd-logind[1837]: Removed session 21.
Dec 16 03:31:49.382626 kubelet[3199]: E1216 03:31:49.382268 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" podUID="e32d83e7-8260-42ce-a13a-b8e2a7a65181"
Dec 16 03:31:51.400396 kubelet[3199]: E1216 03:31:51.400255 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4:
not found\"]" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:31:52.384241 kubelet[3199]: E1216 03:31:52.382697 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-95jjv" podUID="1598e409-2173-4fc3-8415-b507d5511623" Dec 16 03:31:53.380438 kubelet[3199]: E1216 03:31:53.379652 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" podUID="69c31889-528a-4f2f-822c-6d89b94291a9" Dec 16 03:31:53.490910 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 03:31:53.491050 kernel: audit: type=1325 audit(1765855913.484:888): table=filter:152 family=2 entries=26 op=nft_register_rule pid=5860 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:31:53.484000 audit[5860]: NETFILTER_CFG table=filter:152 family=2 entries=26 op=nft_register_rule pid=5860 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:31:53.484000 audit[5860]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffee5819040 a2=0 a3=7ffee581902c items=0 ppid=3301 pid=5860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:53.499204 kernel: audit: type=1300 audit(1765855913.484:888): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffee5819040 a2=0 a3=7ffee581902c items=0 ppid=3301 pid=5860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:53.484000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:31:53.504189 kernel: audit: type=1327 audit(1765855913.484:888): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:31:53.506000 audit[5860]: NETFILTER_CFG table=nat:153 family=2 entries=104 op=nft_register_chain pid=5860 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:31:53.506000 audit[5860]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffee5819040 a2=0 a3=7ffee581902c items=0 ppid=3301 pid=5860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:53.513934 kernel: audit: type=1325 audit(1765855913.506:889): table=nat:153 family=2 entries=104 op=nft_register_chain pid=5860 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:31:53.514040 kernel: audit: type=1300 audit(1765855913.506:889): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffee5819040 a2=0 a3=7ffee581902c items=0 ppid=3301 pid=5860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:53.506000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:31:53.523192 kernel: audit: type=1327 audit(1765855913.506:889): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:31:54.000630 kernel: audit: type=1130 audit(1765855913.992:890): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.30.117:22-147.75.109.163:58952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:53.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.30.117:22-147.75.109.163:58952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:53.993968 systemd[1]: Started sshd@20-172.31.30.117:22-147.75.109.163:58952.service - OpenSSH per-connection server daemon (147.75.109.163:58952). 
Dec 16 03:31:54.181000 audit[5862]: USER_ACCT pid=5862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:54.184120 sshd[5862]: Accepted publickey for core from 147.75.109.163 port 58952 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc Dec 16 03:31:54.186747 sshd-session[5862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:31:54.184000 audit[5862]: CRED_ACQ pid=5862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:54.191112 kernel: audit: type=1101 audit(1765855914.181:891): pid=5862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:54.191240 kernel: audit: type=1103 audit(1765855914.184:892): pid=5862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:54.197467 systemd-logind[1837]: New session 22 of user core. 
Dec 16 03:31:54.199539 kernel: audit: type=1006 audit(1765855914.184:893): pid=5862 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 16 03:31:54.184000 audit[5862]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedf12a160 a2=3 a3=0 items=0 ppid=1 pid=5862 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:54.184000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:31:54.204470 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 03:31:54.210000 audit[5862]: USER_START pid=5862 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:54.213000 audit[5866]: CRED_ACQ pid=5866 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:54.370854 sshd[5866]: Connection closed by 147.75.109.163 port 58952 Dec 16 03:31:54.371862 sshd-session[5862]: pam_unix(sshd:session): session closed for user core Dec 16 03:31:54.375000 audit[5862]: USER_END pid=5862 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:54.375000 audit[5862]: CRED_DISP pid=5862 uid=0 auid=500 
ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:54.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.30.117:22-147.75.109.163:58952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:54.384368 systemd[1]: sshd@20-172.31.30.117:22-147.75.109.163:58952.service: Deactivated successfully. Dec 16 03:31:54.387704 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 03:31:54.390748 systemd-logind[1837]: Session 22 logged out. Waiting for processes to exit. Dec 16 03:31:54.395666 systemd-logind[1837]: Removed session 22. Dec 16 03:31:56.381862 kubelet[3199]: E1216 03:31:56.381796 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9fd54f5-5npkd" podUID="a7663fa2-6c23-44c9-b4f7-07ab48404e5b" Dec 16 03:31:58.381260 kubelet[3199]: E1216 03:31:58.381203 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" podUID="0feff437-5720-498b-a3c1-fbae9f5f245c" Dec 16 03:31:59.419192 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 03:31:59.419329 kernel: audit: type=1130 audit(1765855919.410:899): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.30.117:22-147.75.109.163:58962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:59.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.30.117:22-147.75.109.163:58962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:59.410360 systemd[1]: Started sshd@21-172.31.30.117:22-147.75.109.163:58962.service - OpenSSH per-connection server daemon (147.75.109.163:58962). 
Dec 16 03:31:59.634000 audit[5909]: USER_ACCT pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:59.641222 sshd-session[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:31:59.642137 sshd[5909]: Accepted publickey for core from 147.75.109.163 port 58962 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc Dec 16 03:31:59.642363 kernel: audit: type=1101 audit(1765855919.634:900): pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:59.638000 audit[5909]: CRED_ACQ pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:59.651645 kernel: audit: type=1103 audit(1765855919.638:901): pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:59.638000 audit[5909]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd422c280 a2=3 a3=0 items=0 ppid=1 pid=5909 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:59.657294 systemd-logind[1837]: New session 23 of user core. 
Dec 16 03:31:59.662308 kernel: audit: type=1006 audit(1765855919.638:902): pid=5909 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 03:31:59.662433 kernel: audit: type=1300 audit(1765855919.638:902): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd422c280 a2=3 a3=0 items=0 ppid=1 pid=5909 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:31:59.638000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:31:59.665777 kernel: audit: type=1327 audit(1765855919.638:902): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:31:59.667273 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 03:31:59.675000 audit[5909]: USER_START pid=5909 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:59.685258 kernel: audit: type=1105 audit(1765855919.675:903): pid=5909 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:59.686000 audit[5913]: CRED_ACQ pid=5913 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:59.694193 kernel: audit: type=1103 
audit(1765855919.686:904): pid=5913 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:59.892506 sshd[5913]: Connection closed by 147.75.109.163 port 58962 Dec 16 03:31:59.894424 sshd-session[5909]: pam_unix(sshd:session): session closed for user core Dec 16 03:31:59.904855 kernel: audit: type=1106 audit(1765855919.895:905): pid=5909 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:59.895000 audit[5909]: USER_END pid=5909 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:59.902000 audit[5909]: CRED_DISP pid=5909 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:59.913241 kernel: audit: type=1104 audit(1765855919.902:906): pid=5909 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:31:59.916409 systemd[1]: sshd@21-172.31.30.117:22-147.75.109.163:58962.service: Deactivated successfully. 
Dec 16 03:31:59.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.30.117:22-147.75.109.163:58962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:31:59.920691 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 03:31:59.924021 systemd-logind[1837]: Session 23 logged out. Waiting for processes to exit. Dec 16 03:31:59.926106 systemd-logind[1837]: Removed session 23. Dec 16 03:32:00.404383 kubelet[3199]: E1216 03:32:00.404332 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" podUID="e32d83e7-8260-42ce-a13a-b8e2a7a65181" Dec 16 03:32:03.408737 containerd[1851]: time="2025-12-16T03:32:03.381308799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:32:03.680496 containerd[1851]: time="2025-12-16T03:32:03.680079945Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:32:03.683078 containerd[1851]: time="2025-12-16T03:32:03.683012636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:32:03.683261 containerd[1851]: time="2025-12-16T03:32:03.683140894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:32:03.683739 kubelet[3199]: E1216 03:32:03.683607 
3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:32:03.684167 kubelet[3199]: E1216 03:32:03.683757 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:32:03.709675 kubelet[3199]: E1216 03:32:03.684107 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,S
ubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvbg9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-95jjv_calico-system(1598e409-2173-4fc3-8415-b507d5511623): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:32:03.711036 kubelet[3199]: E1216 03:32:03.710944 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-95jjv" podUID="1598e409-2173-4fc3-8415-b507d5511623" Dec 16 03:32:04.927959 systemd[1]: Started sshd@22-172.31.30.117:22-147.75.109.163:43814.service - OpenSSH per-connection server daemon (147.75.109.163:43814). Dec 16 03:32:04.931229 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:32:04.931777 kernel: audit: type=1130 audit(1765855924.926:908): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.30.117:22-147.75.109.163:43814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:32:04.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.30.117:22-147.75.109.163:43814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:32:05.117000 audit[5935]: USER_ACCT pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:05.123490 sshd[5935]: Accepted publickey for core from 147.75.109.163 port 43814 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc Dec 16 03:32:05.125209 kernel: audit: type=1101 audit(1765855925.117:909): pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:05.128022 sshd-session[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:05.125000 audit[5935]: CRED_ACQ pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:05.135725 kernel: audit: type=1103 audit(1765855925.125:910): pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:05.135827 kernel: audit: type=1006 audit(1765855925.125:911): pid=5935 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 03:32:05.125000 audit[5935]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee9f2dcd0 a2=3 a3=0 items=0 ppid=1 pid=5935 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:05.145348 kernel: audit: type=1300 audit(1765855925.125:911): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee9f2dcd0 a2=3 a3=0 items=0 ppid=1 pid=5935 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:05.147973 kernel: audit: type=1327 audit(1765855925.125:911): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:32:05.125000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:32:05.151247 systemd-logind[1837]: New session 24 of user core. Dec 16 03:32:05.152444 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 03:32:05.159000 audit[5935]: USER_START pid=5935 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:05.168230 kernel: audit: type=1105 audit(1765855925.159:912): pid=5935 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:05.168000 audit[5939]: CRED_ACQ pid=5939 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:05.176216 kernel: audit: type=1103 
audit(1765855925.168:913): pid=5939 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:05.367101 sshd[5939]: Connection closed by 147.75.109.163 port 43814 Dec 16 03:32:05.370013 sshd-session[5935]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:05.383000 audit[5935]: USER_END pid=5935 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:05.392976 kernel: audit: type=1106 audit(1765855925.383:914): pid=5935 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:05.393642 systemd[1]: sshd@22-172.31.30.117:22-147.75.109.163:43814.service: Deactivated successfully. Dec 16 03:32:05.395424 containerd[1851]: time="2025-12-16T03:32:05.393776436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:32:05.400049 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 03:32:05.383000 audit[5935]: CRED_DISP pid=5935 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:05.405323 systemd-logind[1837]: Session 24 logged out. Waiting for processes to exit. 
Dec 16 03:32:05.409264 kernel: audit: type=1104 audit(1765855925.383:915): pid=5935 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:05.409692 systemd-logind[1837]: Removed session 24. Dec 16 03:32:05.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.30.117:22-147.75.109.163:43814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:32:05.687910 containerd[1851]: time="2025-12-16T03:32:05.687859374Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:32:05.690394 containerd[1851]: time="2025-12-16T03:32:05.690334389Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:32:05.690672 containerd[1851]: time="2025-12-16T03:32:05.690368555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:32:05.690889 kubelet[3199]: E1216 03:32:05.690837 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:32:05.691349 kubelet[3199]: E1216 03:32:05.690899 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 
03:32:05.691468 kubelet[3199]: E1216 03:32:05.691406 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmrnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-pl4wm_calico-system(39c8b25e-ea78-443a-855e-e43746267826): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:32:05.695417 containerd[1851]: time="2025-12-16T03:32:05.695365407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:32:05.956843 containerd[1851]: time="2025-12-16T03:32:05.956596078Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:32:05.958842 containerd[1851]: time="2025-12-16T03:32:05.958691600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:32:05.958842 containerd[1851]: time="2025-12-16T03:32:05.958807134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:32:05.959445 kubelet[3199]: E1216 03:32:05.959292 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:32:05.959725 kubelet[3199]: E1216 03:32:05.959698 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 
03:32:05.960066 kubelet[3199]: E1216 03:32:05.959986 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zmrnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-pl4wm_calico-system(39c8b25e-ea78-443a-855e-e43746267826): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:32:05.961337 kubelet[3199]: E1216 03:32:05.961256 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:32:08.381724 containerd[1851]: time="2025-12-16T03:32:08.381146181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:32:08.680928 containerd[1851]: time="2025-12-16T03:32:08.679698371Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:32:08.682315 containerd[1851]: time="2025-12-16T03:32:08.682151534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:32:08.682315 containerd[1851]: time="2025-12-16T03:32:08.682279786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:32:08.683337 
kubelet[3199]: E1216 03:32:08.682631 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:32:08.683337 kubelet[3199]: E1216 03:32:08.682690 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:32:08.683337 kubelet[3199]: E1216 03:32:08.682845 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68pqq,ReadOn
ly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6b698bdfc8-x42pm_calico-system(69c31889-528a-4f2f-822c-6d89b94291a9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:32:08.684088 kubelet[3199]: E1216 03:32:08.684045 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" podUID="69c31889-528a-4f2f-822c-6d89b94291a9" Dec 16 03:32:10.403049 systemd[1]: Started sshd@23-172.31.30.117:22-147.75.109.163:43818.service - OpenSSH per-connection server daemon (147.75.109.163:43818). Dec 16 03:32:10.405521 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:32:10.405588 kernel: audit: type=1130 audit(1765855930.402:917): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.30.117:22-147.75.109.163:43818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:32:10.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.30.117:22-147.75.109.163:43818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:32:10.661000 audit[5964]: USER_ACCT pid=5964 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:10.664285 sshd[5964]: Accepted publickey for core from 147.75.109.163 port 43818 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc Dec 16 03:32:10.666785 sshd-session[5964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:10.669207 kernel: audit: type=1101 audit(1765855930.661:918): pid=5964 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:10.664000 audit[5964]: CRED_ACQ pid=5964 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:10.676206 kernel: audit: type=1103 audit(1765855930.664:919): pid=5964 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:10.680202 kernel: audit: type=1006 audit(1765855930.664:920): pid=5964 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 03:32:10.664000 audit[5964]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd39760df0 a2=3 a3=0 items=0 ppid=1 pid=5964 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:10.685128 systemd-logind[1837]: New session 25 of user core. Dec 16 03:32:10.689908 kernel: audit: type=1300 audit(1765855930.664:920): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd39760df0 a2=3 a3=0 items=0 ppid=1 pid=5964 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:10.690030 kernel: audit: type=1327 audit(1765855930.664:920): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:32:10.664000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:32:10.694227 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 03:32:10.699000 audit[5964]: USER_START pid=5964 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:10.707225 kernel: audit: type=1105 audit(1765855930.699:921): pid=5964 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:10.708000 audit[5968]: CRED_ACQ pid=5968 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:10.715213 kernel: audit: type=1103 audit(1765855930.708:922): pid=5968 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:10.947928 sshd[5968]: Connection closed by 147.75.109.163 port 43818 Dec 16 03:32:10.948658 sshd-session[5964]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:10.950000 audit[5964]: USER_END pid=5964 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:10.957059 systemd[1]: 
sshd@23-172.31.30.117:22-147.75.109.163:43818.service: Deactivated successfully. Dec 16 03:32:10.960597 kernel: audit: type=1106 audit(1765855930.950:923): pid=5964 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:10.961887 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 03:32:10.950000 audit[5964]: CRED_DISP pid=5964 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:10.965049 systemd-logind[1837]: Session 25 logged out. Waiting for processes to exit. Dec 16 03:32:10.968375 systemd-logind[1837]: Removed session 25. Dec 16 03:32:10.969261 kernel: audit: type=1104 audit(1765855930.950:924): pid=5964 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:10.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.30.117:22-147.75.109.163:43818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:32:11.384159 containerd[1851]: time="2025-12-16T03:32:11.383825734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:32:11.653792 containerd[1851]: time="2025-12-16T03:32:11.653579309Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:32:11.655939 containerd[1851]: time="2025-12-16T03:32:11.655887522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:32:11.656107 containerd[1851]: time="2025-12-16T03:32:11.655992145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:32:11.656364 kubelet[3199]: E1216 03:32:11.656322 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:32:11.656782 kubelet[3199]: E1216 03:32:11.656375 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:32:11.656782 kubelet[3199]: E1216 03:32:11.656683 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdx4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f46b988d5-xtbpb_calico-apiserver(e32d83e7-8260-42ce-a13a-b8e2a7a65181): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:32:11.657474 containerd[1851]: time="2025-12-16T03:32:11.657390852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:32:11.657847 kubelet[3199]: E1216 03:32:11.657799 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" podUID="e32d83e7-8260-42ce-a13a-b8e2a7a65181" Dec 16 03:32:11.950189 containerd[1851]: time="2025-12-16T03:32:11.950024151Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:32:11.952245 containerd[1851]: time="2025-12-16T03:32:11.952111139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:32:11.952245 containerd[1851]: time="2025-12-16T03:32:11.952212237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:32:11.952669 kubelet[3199]: E1216 03:32:11.952590 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:32:11.952669 kubelet[3199]: E1216 03:32:11.952637 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:32:11.953397 kubelet[3199]: E1216 03:32:11.953068 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:df40755d990c4efb9d838aa3445cd5a6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hzn98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b9fd54f5-5npkd_calico-system(a7663fa2-6c23-44c9-b4f7-07ab48404e5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:32:11.954490 containerd[1851]: time="2025-12-16T03:32:11.954389787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:32:12.218300 containerd[1851]: time="2025-12-16T03:32:12.218164444Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:32:12.229220 containerd[1851]: time="2025-12-16T03:32:12.229115196Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:32:12.229606 containerd[1851]: time="2025-12-16T03:32:12.229460624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:32:12.229900 kubelet[3199]: E1216 03:32:12.229840 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:32:12.229995 kubelet[3199]: E1216 03:32:12.229918 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:32:12.231345 kubelet[3199]: E1216 03:32:12.230678 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k88lr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-5f46b988d5-dv5td_calico-apiserver(0feff437-5720-498b-a3c1-fbae9f5f245c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:32:12.232506 containerd[1851]: time="2025-12-16T03:32:12.230795588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:32:12.232624 kubelet[3199]: E1216 03:32:12.232035 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" podUID="0feff437-5720-498b-a3c1-fbae9f5f245c" Dec 16 03:32:12.548849 containerd[1851]: time="2025-12-16T03:32:12.548561485Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:32:12.552412 containerd[1851]: time="2025-12-16T03:32:12.552357472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:32:12.552626 containerd[1851]: time="2025-12-16T03:32:12.552373756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:32:12.552921 kubelet[3199]: E1216 03:32:12.552668 3199 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:32:12.552921 kubelet[3199]: E1216 03:32:12.552726 3199 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:32:12.552921 kubelet[3199]: E1216 03:32:12.552870 3199 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzn98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivi
legeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b9fd54f5-5npkd_calico-system(a7663fa2-6c23-44c9-b4f7-07ab48404e5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:32:12.554684 kubelet[3199]: E1216 03:32:12.554454 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9fd54f5-5npkd" podUID="a7663fa2-6c23-44c9-b4f7-07ab48404e5b" Dec 16 03:32:15.995290 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:32:15.995446 kernel: audit: type=1130 audit(1765855935.986:926): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.30.117:22-147.75.109.163:37536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:32:15.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.30.117:22-147.75.109.163:37536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:32:15.987905 systemd[1]: Started sshd@24-172.31.30.117:22-147.75.109.163:37536.service - OpenSSH per-connection server daemon (147.75.109.163:37536). Dec 16 03:32:16.201000 audit[5982]: USER_ACCT pid=5982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:16.205725 sshd-session[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:16.208731 sshd[5982]: Accepted publickey for core from 147.75.109.163 port 37536 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc Dec 16 03:32:16.201000 audit[5982]: CRED_ACQ pid=5982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:16.213192 kernel: audit: type=1101 audit(1765855936.201:927): pid=5982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:16.213728 kernel: audit: type=1103 audit(1765855936.201:928): pid=5982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' 
Dec 16 03:32:16.222358 systemd-logind[1837]: New session 26 of user core. Dec 16 03:32:16.201000 audit[5982]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb8b3b990 a2=3 a3=0 items=0 ppid=1 pid=5982 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:16.229737 kernel: audit: type=1006 audit(1765855936.201:929): pid=5982 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 03:32:16.229872 kernel: audit: type=1300 audit(1765855936.201:929): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb8b3b990 a2=3 a3=0 items=0 ppid=1 pid=5982 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:16.201000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:32:16.235068 kernel: audit: type=1327 audit(1765855936.201:929): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:32:16.236692 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 03:32:16.242000 audit[5982]: USER_START pid=5982 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:16.245000 audit[5986]: CRED_ACQ pid=5986 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:16.252150 kernel: audit: type=1105 audit(1765855936.242:930): pid=5982 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:16.252276 kernel: audit: type=1103 audit(1765855936.245:931): pid=5986 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:16.452780 sshd[5986]: Connection closed by 147.75.109.163 port 37536 Dec 16 03:32:16.453470 sshd-session[5982]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:16.456000 audit[5982]: USER_END pid=5982 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:16.465232 kernel: audit: type=1106 
audit(1765855936.456:932): pid=5982 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:16.466432 systemd[1]: sshd@24-172.31.30.117:22-147.75.109.163:37536.service: Deactivated successfully. Dec 16 03:32:16.469599 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 03:32:16.456000 audit[5982]: CRED_DISP pid=5982 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:16.478886 kernel: audit: type=1104 audit(1765855936.456:933): pid=5982 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:16.478357 systemd-logind[1837]: Session 26 logged out. Waiting for processes to exit. Dec 16 03:32:16.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.30.117:22-147.75.109.163:37536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:32:16.485549 systemd-logind[1837]: Removed session 26. 
Dec 16 03:32:17.381089 kubelet[3199]: E1216 03:32:17.381031 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-95jjv" podUID="1598e409-2173-4fc3-8415-b507d5511623" Dec 16 03:32:18.382602 kubelet[3199]: E1216 03:32:18.382482 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:32:21.491629 systemd[1]: Started sshd@25-172.31.30.117:22-147.75.109.163:37552.service - OpenSSH per-connection server daemon (147.75.109.163:37552). 
Dec 16 03:32:21.498602 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:32:21.498688 kernel: audit: type=1130 audit(1765855941.490:935): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.30.117:22-147.75.109.163:37552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:32:21.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.30.117:22-147.75.109.163:37552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:32:21.683901 sshd[5998]: Accepted publickey for core from 147.75.109.163 port 37552 ssh2: RSA SHA256:uBaM3bm+zj9/JbIwwGUAopeUEAA3QkIwB8PfzkI0BSc Dec 16 03:32:21.682000 audit[5998]: USER_ACCT pid=5998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:21.688154 sshd-session[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:32:21.691419 kernel: audit: type=1101 audit(1765855941.682:936): pid=5998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:21.686000 audit[5998]: CRED_ACQ pid=5998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:21.702155 kernel: audit: type=1103 audit(1765855941.686:937): pid=5998 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:21.709206 kernel: audit: type=1006 audit(1765855941.686:938): pid=5998 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 16 03:32:21.686000 audit[5998]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff670cc380 a2=3 a3=0 items=0 ppid=1 pid=5998 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:21.712230 systemd-logind[1837]: New session 27 of user core. Dec 16 03:32:21.717191 kernel: audit: type=1300 audit(1765855941.686:938): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff670cc380 a2=3 a3=0 items=0 ppid=1 pid=5998 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:21.686000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:32:21.720493 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 16 03:32:21.722413 kernel: audit: type=1327 audit(1765855941.686:938): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:32:21.728000 audit[5998]: USER_START pid=5998 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:21.736268 kernel: audit: type=1105 audit(1765855941.728:939): pid=5998 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:21.730000 audit[6002]: CRED_ACQ pid=6002 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:21.744202 kernel: audit: type=1103 audit(1765855941.730:940): pid=6002 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:21.916218 sshd[6002]: Connection closed by 147.75.109.163 port 37552 Dec 16 03:32:21.918465 sshd-session[5998]: pam_unix(sshd:session): session closed for user core Dec 16 03:32:21.918000 audit[5998]: USER_END pid=5998 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:21.927084 kernel: audit: type=1106 audit(1765855941.918:941): pid=5998 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:21.923043 systemd-logind[1837]: Session 27 logged out. Waiting for processes to exit. Dec 16 03:32:21.925143 systemd[1]: sshd@25-172.31.30.117:22-147.75.109.163:37552.service: Deactivated successfully. Dec 16 03:32:21.932480 kernel: audit: type=1104 audit(1765855941.919:942): pid=5998 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:21.919000 audit[5998]: CRED_DISP pid=5998 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Dec 16 03:32:21.928493 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 03:32:21.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.30.117:22-147.75.109.163:37552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:32:21.930891 systemd-logind[1837]: Removed session 27. 
Dec 16 03:32:24.380628 kubelet[3199]: E1216 03:32:24.380579 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" podUID="0feff437-5720-498b-a3c1-fbae9f5f245c" Dec 16 03:32:24.381405 kubelet[3199]: E1216 03:32:24.381371 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" podUID="69c31889-528a-4f2f-822c-6d89b94291a9" Dec 16 03:32:25.389210 kubelet[3199]: E1216 03:32:25.388943 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9fd54f5-5npkd" podUID="a7663fa2-6c23-44c9-b4f7-07ab48404e5b" Dec 16 03:32:27.381023 kubelet[3199]: E1216 03:32:27.380351 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" podUID="e32d83e7-8260-42ce-a13a-b8e2a7a65181" Dec 16 03:32:30.380323 kubelet[3199]: E1216 03:32:30.380270 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pl4wm" podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:32:31.380255 kubelet[3199]: E1216 03:32:31.379945 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-95jjv" podUID="1598e409-2173-4fc3-8415-b507d5511623" Dec 16 03:32:35.379187 kubelet[3199]: E1216 03:32:35.379040 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" podUID="0feff437-5720-498b-a3c1-fbae9f5f245c" Dec 16 03:32:36.316772 systemd[1]: cri-containerd-c5515f9677a6228b55d852b04660f65a41b90e8100caef6a1299c76a503b4267.scope: Deactivated successfully. Dec 16 03:32:36.317540 systemd[1]: cri-containerd-c5515f9677a6228b55d852b04660f65a41b90e8100caef6a1299c76a503b4267.scope: Consumed 23.742s CPU time, 110M memory peak, 49.9M read from disk. 
Dec 16 03:32:36.324664 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:32:36.325046 kernel: audit: type=1334 audit(1765855956.320:944): prog-id=156 op=UNLOAD Dec 16 03:32:36.320000 audit: BPF prog-id=156 op=UNLOAD Dec 16 03:32:36.320000 audit: BPF prog-id=160 op=UNLOAD Dec 16 03:32:36.327386 kernel: audit: type=1334 audit(1765855956.320:945): prog-id=160 op=UNLOAD Dec 16 03:32:36.389598 containerd[1851]: time="2025-12-16T03:32:36.389541798Z" level=info msg="received container exit event container_id:\"c5515f9677a6228b55d852b04660f65a41b90e8100caef6a1299c76a503b4267\" id:\"c5515f9677a6228b55d852b04660f65a41b90e8100caef6a1299c76a503b4267\" pid:3786 exit_status:1 exited_at:{seconds:1765855956 nanos:322045847}" Dec 16 03:32:36.510425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5515f9677a6228b55d852b04660f65a41b90e8100caef6a1299c76a503b4267-rootfs.mount: Deactivated successfully. Dec 16 03:32:37.379932 kubelet[3199]: E1216 03:32:37.379825 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" podUID="69c31889-528a-4f2f-822c-6d89b94291a9" Dec 16 03:32:37.441599 kubelet[3199]: I1216 03:32:37.441525 3199 scope.go:117] "RemoveContainer" containerID="c5515f9677a6228b55d852b04660f65a41b90e8100caef6a1299c76a503b4267" Dec 16 03:32:37.472960 containerd[1851]: time="2025-12-16T03:32:37.472906378Z" level=info msg="CreateContainer within sandbox \"c42cf9093ecae5d8447be45b33b1a53d014ce2821f085946f774d851dad93e85\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 16 
03:32:37.500478 containerd[1851]: time="2025-12-16T03:32:37.499144244Z" level=info msg="Container 9c6147f3106663b8d9b73190725cd153f43f6e0e7d649aa54770a848ae617866: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:32:37.518606 containerd[1851]: time="2025-12-16T03:32:37.518475648Z" level=info msg="CreateContainer within sandbox \"c42cf9093ecae5d8447be45b33b1a53d014ce2821f085946f774d851dad93e85\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9c6147f3106663b8d9b73190725cd153f43f6e0e7d649aa54770a848ae617866\"" Dec 16 03:32:37.519568 containerd[1851]: time="2025-12-16T03:32:37.519005549Z" level=info msg="StartContainer for \"9c6147f3106663b8d9b73190725cd153f43f6e0e7d649aa54770a848ae617866\"" Dec 16 03:32:37.520352 containerd[1851]: time="2025-12-16T03:32:37.520326123Z" level=info msg="connecting to shim 9c6147f3106663b8d9b73190725cd153f43f6e0e7d649aa54770a848ae617866" address="unix:///run/containerd/s/38a2b3ab94e5efe1b7a0d0590c4796fc4f2e50cf518df4ed65a0df00a964ba52" protocol=ttrpc version=3 Dec 16 03:32:37.551556 systemd[1]: Started cri-containerd-9c6147f3106663b8d9b73190725cd153f43f6e0e7d649aa54770a848ae617866.scope - libcontainer container 9c6147f3106663b8d9b73190725cd153f43f6e0e7d649aa54770a848ae617866. 
Dec 16 03:32:37.573000 audit: BPF prog-id=266 op=LOAD Dec 16 03:32:37.576250 kernel: audit: type=1334 audit(1765855957.573:946): prog-id=266 op=LOAD Dec 16 03:32:37.575000 audit: BPF prog-id=267 op=LOAD Dec 16 03:32:37.575000 audit[6053]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3325 pid=6053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:37.580607 kernel: audit: type=1334 audit(1765855957.575:947): prog-id=267 op=LOAD Dec 16 03:32:37.580757 kernel: audit: type=1300 audit(1765855957.575:947): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3325 pid=6053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:37.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363134376633313036363633623864396237333139303732356364 Dec 16 03:32:37.587475 kernel: audit: type=1327 audit(1765855957.575:947): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363134376633313036363633623864396237333139303732356364 Dec 16 03:32:37.575000 audit: BPF prog-id=267 op=UNLOAD Dec 16 03:32:37.591933 kernel: audit: type=1334 audit(1765855957.575:948): prog-id=267 op=UNLOAD Dec 16 03:32:37.575000 audit[6053]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3325 pid=6053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:37.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363134376633313036363633623864396237333139303732356364 Dec 16 03:32:37.603157 kernel: audit: type=1300 audit(1765855957.575:948): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3325 pid=6053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:37.603367 kernel: audit: type=1327 audit(1765855957.575:948): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363134376633313036363633623864396237333139303732356364 Dec 16 03:32:37.579000 audit: BPF prog-id=268 op=LOAD Dec 16 03:32:37.611385 kernel: audit: type=1334 audit(1765855957.579:949): prog-id=268 op=LOAD Dec 16 03:32:37.579000 audit[6053]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3325 pid=6053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:37.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363134376633313036363633623864396237333139303732356364 Dec 16 03:32:37.579000 audit: BPF prog-id=269 op=LOAD Dec 16 03:32:37.579000 audit[6053]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3325 pid=6053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:37.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363134376633313036363633623864396237333139303732356364 Dec 16 03:32:37.579000 audit: BPF prog-id=269 op=UNLOAD Dec 16 03:32:37.579000 audit[6053]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3325 pid=6053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:37.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363134376633313036363633623864396237333139303732356364 Dec 16 03:32:37.579000 audit: BPF prog-id=268 op=UNLOAD Dec 16 03:32:37.579000 audit[6053]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3325 pid=6053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:37.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363134376633313036363633623864396237333139303732356364 Dec 16 03:32:37.579000 audit: BPF prog-id=270 op=LOAD Dec 16 03:32:37.579000 audit[6053]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3325 pid=6053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:37.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363134376633313036363633623864396237333139303732356364 Dec 16 03:32:37.643332 containerd[1851]: time="2025-12-16T03:32:37.643076044Z" level=info msg="StartContainer for \"9c6147f3106663b8d9b73190725cd153f43f6e0e7d649aa54770a848ae617866\" returns successfully" Dec 16 03:32:37.672811 systemd[1]: cri-containerd-3c85e94b6817c24fb53169144aa8695d1934d54cc7851c0e648a975c8c7cb1d7.scope: Deactivated successfully. Dec 16 03:32:37.673629 systemd[1]: cri-containerd-3c85e94b6817c24fb53169144aa8695d1934d54cc7851c0e648a975c8c7cb1d7.scope: Consumed 5.227s CPU time, 102.7M memory peak, 90.9M read from disk. Dec 16 03:32:37.673000 audit: BPF prog-id=271 op=LOAD Dec 16 03:32:37.673000 audit: BPF prog-id=103 op=UNLOAD Dec 16 03:32:37.675000 audit: BPF prog-id=108 op=UNLOAD Dec 16 03:32:37.675000 audit: BPF prog-id=112 op=UNLOAD Dec 16 03:32:37.678043 containerd[1851]: time="2025-12-16T03:32:37.678002780Z" level=info msg="received container exit event container_id:\"3c85e94b6817c24fb53169144aa8695d1934d54cc7851c0e648a975c8c7cb1d7\" id:\"3c85e94b6817c24fb53169144aa8695d1934d54cc7851c0e648a975c8c7cb1d7\" pid:3013 exit_status:1 exited_at:{seconds:1765855957 nanos:677544464}" Dec 16 03:32:37.718974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c85e94b6817c24fb53169144aa8695d1934d54cc7851c0e648a975c8c7cb1d7-rootfs.mount: Deactivated successfully. 
Dec 16 03:32:38.206643 kubelet[3199]: E1216 03:32:38.206302 3199 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-117?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 03:32:38.436835 kubelet[3199]: I1216 03:32:38.436795 3199 scope.go:117] "RemoveContainer" containerID="3c85e94b6817c24fb53169144aa8695d1934d54cc7851c0e648a975c8c7cb1d7" Dec 16 03:32:38.439106 containerd[1851]: time="2025-12-16T03:32:38.439068865Z" level=info msg="CreateContainer within sandbox \"4bb68c9a2a4d5431be045979c0ca127ed2a7f9bf3fcca716fff9e21590bc2820\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 16 03:32:38.487299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360064142.mount: Deactivated successfully. Dec 16 03:32:38.535226 containerd[1851]: time="2025-12-16T03:32:38.533447923Z" level=info msg="Container 49d13f41eedadc93261b7e46906c319074bc1c81cd24acbb7877f57dbbc9261e: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:32:38.551568 containerd[1851]: time="2025-12-16T03:32:38.551525394Z" level=info msg="CreateContainer within sandbox \"4bb68c9a2a4d5431be045979c0ca127ed2a7f9bf3fcca716fff9e21590bc2820\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"49d13f41eedadc93261b7e46906c319074bc1c81cd24acbb7877f57dbbc9261e\"" Dec 16 03:32:38.552382 containerd[1851]: time="2025-12-16T03:32:38.552344411Z" level=info msg="StartContainer for \"49d13f41eedadc93261b7e46906c319074bc1c81cd24acbb7877f57dbbc9261e\"" Dec 16 03:32:38.554003 containerd[1851]: time="2025-12-16T03:32:38.553965203Z" level=info msg="connecting to shim 49d13f41eedadc93261b7e46906c319074bc1c81cd24acbb7877f57dbbc9261e" address="unix:///run/containerd/s/03f9ab29871e0e9600dcdbe26a5c595610e1c45f7e4bfbd288a58958584777e0" protocol=ttrpc version=3 Dec 16 03:32:38.586499 systemd[1]: Started 
cri-containerd-49d13f41eedadc93261b7e46906c319074bc1c81cd24acbb7877f57dbbc9261e.scope - libcontainer container 49d13f41eedadc93261b7e46906c319074bc1c81cd24acbb7877f57dbbc9261e. Dec 16 03:32:38.605000 audit: BPF prog-id=272 op=LOAD Dec 16 03:32:38.605000 audit: BPF prog-id=273 op=LOAD Dec 16 03:32:38.605000 audit[6095]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2877 pid=6095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:38.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643133663431656564616463393332363162376534363930366333 Dec 16 03:32:38.605000 audit: BPF prog-id=273 op=UNLOAD Dec 16 03:32:38.605000 audit[6095]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=6095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:38.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643133663431656564616463393332363162376534363930366333 Dec 16 03:32:38.605000 audit: BPF prog-id=274 op=LOAD Dec 16 03:32:38.605000 audit[6095]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2877 pid=6095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:38.605000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643133663431656564616463393332363162376534363930366333 Dec 16 03:32:38.605000 audit: BPF prog-id=275 op=LOAD Dec 16 03:32:38.605000 audit[6095]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2877 pid=6095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:38.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643133663431656564616463393332363162376534363930366333 Dec 16 03:32:38.605000 audit: BPF prog-id=275 op=UNLOAD Dec 16 03:32:38.605000 audit[6095]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=6095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:38.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643133663431656564616463393332363162376534363930366333 Dec 16 03:32:38.605000 audit: BPF prog-id=274 op=UNLOAD Dec 16 03:32:38.605000 audit[6095]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=6095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:32:38.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643133663431656564616463393332363162376534363930366333 Dec 16 03:32:38.605000 audit: BPF prog-id=276 op=LOAD Dec 16 03:32:38.605000 audit[6095]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2877 pid=6095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:38.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643133663431656564616463393332363162376534363930366333 Dec 16 03:32:38.657313 containerd[1851]: time="2025-12-16T03:32:38.657265897Z" level=info msg="StartContainer for \"49d13f41eedadc93261b7e46906c319074bc1c81cd24acbb7877f57dbbc9261e\" returns successfully" Dec 16 03:32:40.380014 kubelet[3199]: E1216 03:32:40.379824 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" podUID="e32d83e7-8260-42ce-a13a-b8e2a7a65181" Dec 16 03:32:40.381270 kubelet[3199]: E1216 03:32:40.381076 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9fd54f5-5npkd" podUID="a7663fa2-6c23-44c9-b4f7-07ab48404e5b" Dec 16 03:32:42.025065 systemd[1]: cri-containerd-8975df7704c92e3764f92d66ac0bb04bba34f7ec7c8e828b70435395fd5602c4.scope: Deactivated successfully. Dec 16 03:32:42.025515 systemd[1]: cri-containerd-8975df7704c92e3764f92d66ac0bb04bba34f7ec7c8e828b70435395fd5602c4.scope: Consumed 3.673s CPU time, 41.4M memory peak, 44.1M read from disk. 
Dec 16 03:32:42.030039 kernel: kauditd_printk_skb: 40 callbacks suppressed Dec 16 03:32:42.030158 kernel: audit: type=1334 audit(1765855962.025:966): prog-id=277 op=LOAD Dec 16 03:32:42.025000 audit: BPF prog-id=277 op=LOAD Dec 16 03:32:42.032706 kernel: audit: type=1334 audit(1765855962.027:967): prog-id=93 op=UNLOAD Dec 16 03:32:42.027000 audit: BPF prog-id=93 op=UNLOAD Dec 16 03:32:42.033661 containerd[1851]: time="2025-12-16T03:32:42.033349443Z" level=info msg="received container exit event container_id:\"8975df7704c92e3764f92d66ac0bb04bba34f7ec7c8e828b70435395fd5602c4\" id:\"8975df7704c92e3764f92d66ac0bb04bba34f7ec7c8e828b70435395fd5602c4\" pid:3044 exit_status:1 exited_at:{seconds:1765855962 nanos:32159315}" Dec 16 03:32:42.031000 audit: BPF prog-id=118 op=UNLOAD Dec 16 03:32:42.037121 kernel: audit: type=1334 audit(1765855962.031:968): prog-id=118 op=UNLOAD Dec 16 03:32:42.037221 kernel: audit: type=1334 audit(1765855962.031:969): prog-id=122 op=UNLOAD Dec 16 03:32:42.031000 audit: BPF prog-id=122 op=UNLOAD Dec 16 03:32:42.070037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8975df7704c92e3764f92d66ac0bb04bba34f7ec7c8e828b70435395fd5602c4-rootfs.mount: Deactivated successfully. 
Dec 16 03:32:42.454877 kubelet[3199]: I1216 03:32:42.454644 3199 scope.go:117] "RemoveContainer" containerID="8975df7704c92e3764f92d66ac0bb04bba34f7ec7c8e828b70435395fd5602c4" Dec 16 03:32:42.457375 containerd[1851]: time="2025-12-16T03:32:42.457338398Z" level=info msg="CreateContainer within sandbox \"5e02ac517055bd7876da8b8a04ce0ffefdab2d35b887c11b2781867c92d2cdee\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 16 03:32:42.478849 containerd[1851]: time="2025-12-16T03:32:42.477687517Z" level=info msg="Container 2bf1c7dc8f7fe32fb424535642023913cc613b39be6d291239410c3ec06d2104: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:32:42.491976 containerd[1851]: time="2025-12-16T03:32:42.491908329Z" level=info msg="CreateContainer within sandbox \"5e02ac517055bd7876da8b8a04ce0ffefdab2d35b887c11b2781867c92d2cdee\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2bf1c7dc8f7fe32fb424535642023913cc613b39be6d291239410c3ec06d2104\"" Dec 16 03:32:42.492596 containerd[1851]: time="2025-12-16T03:32:42.492561291Z" level=info msg="StartContainer for \"2bf1c7dc8f7fe32fb424535642023913cc613b39be6d291239410c3ec06d2104\"" Dec 16 03:32:42.493851 containerd[1851]: time="2025-12-16T03:32:42.493814809Z" level=info msg="connecting to shim 2bf1c7dc8f7fe32fb424535642023913cc613b39be6d291239410c3ec06d2104" address="unix:///run/containerd/s/85392e2c4e110945446e0895f0647c3ebdd7f31ed71aeca4118f536fa72a3773" protocol=ttrpc version=3 Dec 16 03:32:42.521465 systemd[1]: Started cri-containerd-2bf1c7dc8f7fe32fb424535642023913cc613b39be6d291239410c3ec06d2104.scope - libcontainer container 2bf1c7dc8f7fe32fb424535642023913cc613b39be6d291239410c3ec06d2104. 
Dec 16 03:32:42.539000 audit: BPF prog-id=278 op=LOAD Dec 16 03:32:42.543283 kernel: audit: type=1334 audit(1765855962.539:970): prog-id=278 op=LOAD Dec 16 03:32:42.542000 audit: BPF prog-id=279 op=LOAD Dec 16 03:32:42.542000 audit[6141]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2874 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:42.546189 kernel: audit: type=1334 audit(1765855962.542:971): prog-id=279 op=LOAD Dec 16 03:32:42.546262 kernel: audit: type=1300 audit(1765855962.542:971): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2874 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:42.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663163376463386637666533326662343234353335363432303233 Dec 16 03:32:42.542000 audit: BPF prog-id=279 op=UNLOAD Dec 16 03:32:42.556654 kernel: audit: type=1327 audit(1765855962.542:971): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663163376463386637666533326662343234353335363432303233 Dec 16 03:32:42.556758 kernel: audit: type=1334 audit(1765855962.542:972): prog-id=279 op=UNLOAD Dec 16 03:32:42.542000 audit[6141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:42.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663163376463386637666533326662343234353335363432303233 Dec 16 03:32:42.564231 kernel: audit: type=1300 audit(1765855962.542:972): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:42.545000 audit: BPF prog-id=280 op=LOAD Dec 16 03:32:42.545000 audit[6141]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2874 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:42.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663163376463386637666533326662343234353335363432303233 Dec 16 03:32:42.545000 audit: BPF prog-id=281 op=LOAD Dec 16 03:32:42.545000 audit[6141]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2874 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:42.545000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663163376463386637666533326662343234353335363432303233 Dec 16 03:32:42.545000 audit: BPF prog-id=281 op=UNLOAD Dec 16 03:32:42.545000 audit[6141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:42.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663163376463386637666533326662343234353335363432303233 Dec 16 03:32:42.545000 audit: BPF prog-id=280 op=UNLOAD Dec 16 03:32:42.545000 audit[6141]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:32:42.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663163376463386637666533326662343234353335363432303233 Dec 16 03:32:42.545000 audit: BPF prog-id=282 op=LOAD Dec 16 03:32:42.545000 audit[6141]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2874 pid=6141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:32:42.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663163376463386637666533326662343234353335363432303233 Dec 16 03:32:42.600405 containerd[1851]: time="2025-12-16T03:32:42.600350360Z" level=info msg="StartContainer for \"2bf1c7dc8f7fe32fb424535642023913cc613b39be6d291239410c3ec06d2104\" returns successfully" Dec 16 03:32:43.381338 kubelet[3199]: E1216 03:32:43.381294 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-95jjv" podUID="1598e409-2173-4fc3-8415-b507d5511623" Dec 16 03:32:45.380202 kubelet[3199]: E1216 03:32:45.380074 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pl4wm" 
podUID="39c8b25e-ea78-443a-855e-e43746267826" Dec 16 03:32:48.206678 kubelet[3199]: E1216 03:32:48.206626 3199 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-117?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 03:32:48.379948 kubelet[3199]: E1216 03:32:48.379898 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b698bdfc8-x42pm" podUID="69c31889-528a-4f2f-822c-6d89b94291a9" Dec 16 03:32:49.379931 kubelet[3199]: E1216 03:32:49.379758 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-dv5td" podUID="0feff437-5720-498b-a3c1-fbae9f5f245c" Dec 16 03:32:50.339358 systemd[1]: cri-containerd-9c6147f3106663b8d9b73190725cd153f43f6e0e7d649aa54770a848ae617866.scope: Deactivated successfully. Dec 16 03:32:50.340246 systemd[1]: cri-containerd-9c6147f3106663b8d9b73190725cd153f43f6e0e7d649aa54770a848ae617866.scope: Consumed 361ms CPU time, 69.1M memory peak, 34M read from disk. 
Dec 16 03:32:50.342085 containerd[1851]: time="2025-12-16T03:32:50.340302034Z" level=info msg="received container exit event container_id:\"9c6147f3106663b8d9b73190725cd153f43f6e0e7d649aa54770a848ae617866\" id:\"9c6147f3106663b8d9b73190725cd153f43f6e0e7d649aa54770a848ae617866\" pid:6066 exit_status:1 exited_at:{seconds:1765855970 nanos:339787088}" Dec 16 03:32:50.346386 kernel: kauditd_printk_skb: 16 callbacks suppressed Dec 16 03:32:50.346526 kernel: audit: type=1334 audit(1765855970.342:978): prog-id=266 op=UNLOAD Dec 16 03:32:50.342000 audit: BPF prog-id=266 op=UNLOAD Dec 16 03:32:50.342000 audit: BPF prog-id=270 op=UNLOAD Dec 16 03:32:50.349271 kernel: audit: type=1334 audit(1765855970.342:979): prog-id=270 op=UNLOAD Dec 16 03:32:50.371299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c6147f3106663b8d9b73190725cd153f43f6e0e7d649aa54770a848ae617866-rootfs.mount: Deactivated successfully. Dec 16 03:32:50.494349 kubelet[3199]: I1216 03:32:50.494301 3199 scope.go:117] "RemoveContainer" containerID="c5515f9677a6228b55d852b04660f65a41b90e8100caef6a1299c76a503b4267" Dec 16 03:32:50.494986 kubelet[3199]: I1216 03:32:50.494505 3199 scope.go:117] "RemoveContainer" containerID="9c6147f3106663b8d9b73190725cd153f43f6e0e7d649aa54770a848ae617866" Dec 16 03:32:50.494986 kubelet[3199]: E1216 03:32:50.494792 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-hl7hh_tigera-operator(5e4b73a4-aabb-49da-b722-a97df77eacc0)\"" pod="tigera-operator/tigera-operator-7dcd859c48-hl7hh" podUID="5e4b73a4-aabb-49da-b722-a97df77eacc0" Dec 16 03:32:50.520245 containerd[1851]: time="2025-12-16T03:32:50.520202495Z" level=info msg="RemoveContainer for \"c5515f9677a6228b55d852b04660f65a41b90e8100caef6a1299c76a503b4267\"" Dec 16 03:32:50.588625 containerd[1851]: time="2025-12-16T03:32:50.588573305Z" 
level=info msg="RemoveContainer for \"c5515f9677a6228b55d852b04660f65a41b90e8100caef6a1299c76a503b4267\" returns successfully" Dec 16 03:32:54.379749 kubelet[3199]: E1216 03:32:54.379641 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f46b988d5-xtbpb" podUID="e32d83e7-8260-42ce-a13a-b8e2a7a65181" Dec 16 03:32:55.380116 kubelet[3199]: E1216 03:32:55.379946 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-95jjv" podUID="1598e409-2173-4fc3-8415-b507d5511623" Dec 16 03:32:55.381048 kubelet[3199]: E1216 03:32:55.380993 3199 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b9fd54f5-5npkd" podUID="a7663fa2-6c23-44c9-b4f7-07ab48404e5b"